[2023-08-30 09:42:07,326][00929] Saving configuration to /content/train_dir/default_experiment/config.json...
[2023-08-30 09:42:07,328][00929] Rollout worker 0 uses device cpu
[2023-08-30 09:42:07,331][00929] Rollout worker 1 uses device cpu
[2023-08-30 09:42:07,333][00929] Rollout worker 2 uses device cpu
[2023-08-30 09:42:07,334][00929] Rollout worker 3 uses device cpu
[2023-08-30 09:42:07,335][00929] Rollout worker 4 uses device cpu
[2023-08-30 09:42:07,336][00929] Rollout worker 5 uses device cpu
[2023-08-30 09:42:07,338][00929] Rollout worker 6 uses device cpu
[2023-08-30 09:42:07,342][00929] Rollout worker 7 uses device cpu
[2023-08-30 09:42:07,518][00929] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-30 09:42:07,523][00929] InferenceWorker_p0-w0: min num requests: 2
[2023-08-30 09:42:07,572][00929] Starting all processes...
[2023-08-30 09:42:07,576][00929] Starting process learner_proc0
[2023-08-30 09:42:07,649][00929] Starting all processes...
[2023-08-30 09:42:07,658][00929] Starting process inference_proc0-0
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc0
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc1
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc2
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc3
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc4
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc5
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc6
[2023-08-30 09:42:07,659][00929] Starting process rollout_proc7
[2023-08-30 09:42:25,337][08367] Worker 5 uses CPU cores [1]
[2023-08-30 09:42:25,537][08365] Worker 3 uses CPU cores [1]
[2023-08-30 09:42:25,623][08348] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-30 09:42:25,624][08348] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2023-08-30 09:42:25,691][08369] Worker 7 uses CPU cores [1]
[2023-08-30 09:42:25,720][08348] Num visible devices: 1
[2023-08-30 09:42:25,755][08348] Starting seed is not provided
[2023-08-30 09:42:25,755][08348] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-30 09:42:25,755][08348] Initializing actor-critic model on device cuda:0
[2023-08-30 09:42:25,756][08348] RunningMeanStd input shape: (3, 72, 128)
[2023-08-30 09:42:25,758][08348] RunningMeanStd input shape: (1,)
[2023-08-30 09:42:25,772][08366] Worker 4 uses CPU cores [0]
[2023-08-30 09:42:25,869][08364] Worker 2 uses CPU cores [0]
[2023-08-30 09:42:25,888][08348] ConvEncoder: input_channels=3
[2023-08-30 09:42:25,944][08363] Worker 1 uses CPU cores [1]
[2023-08-30 09:42:25,984][08361] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-30 09:42:25,984][08361] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2023-08-30 09:42:26,039][08361] Num visible devices: 1
[2023-08-30 09:42:26,213][08368] Worker 6 uses CPU cores [0]
[2023-08-30 09:42:26,248][08362] Worker 0 uses CPU cores [0]
[2023-08-30 09:42:26,497][08348] Conv encoder output size: 512
[2023-08-30 09:42:26,498][08348] Policy head output size: 512
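The "Conv encoder output size: 512" line above is the flattened conv feature map projected to 512 dimensions by the encoder's MLP layer. A rough sanity check of the shape arithmetic for the (3, 72, 128) observations, assuming a three-layer valid-convolution stack with kernel/stride pairs 8/4, 4/2, 3/2 (an assumption about the default config, not something stated in this log):

```python
def conv_out(size: int, kernel: int, stride: int) -> int:
    """Output length of a valid (no padding) convolution along one axis."""
    return (size - kernel) // stride + 1

# Observation shape from the log: 3 channels, 72 x 128 pixels.
h, w = 72, 128

# Hypothetical conv stack: (out_channels, kernel, stride) per layer.
layers = [(32, 8, 4), (64, 4, 2), (128, 3, 2)]

channels = 3
for channels, k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)

flat = channels * h * w  # flattened conv output fed to the MLP
print(h, w, flat)  # → 3 6 2304; the MLP then maps this to 512
```

Under these assumed kernels the conv head produces a 3x6x128 map (2304 values), and the `mlp_layers` Linear reduces that to the 512 reported above.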
[2023-08-30 09:42:26,564][08348] Created Actor Critic model with architecture:
[2023-08-30 09:42:26,564][08348] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2023-08-30 09:42:27,508][00929] Heartbeat connected on Batcher_0
[2023-08-30 09:42:27,519][00929] Heartbeat connected on InferenceWorker_p0-w0
[2023-08-30 09:42:27,530][00929] Heartbeat connected on RolloutWorker_w0
[2023-08-30 09:42:27,537][00929] Heartbeat connected on RolloutWorker_w1
[2023-08-30 09:42:27,544][00929] Heartbeat connected on RolloutWorker_w2
[2023-08-30 09:42:27,549][00929] Heartbeat connected on RolloutWorker_w3
[2023-08-30 09:42:27,555][00929] Heartbeat connected on RolloutWorker_w4
[2023-08-30 09:42:27,560][00929] Heartbeat connected on RolloutWorker_w5
[2023-08-30 09:42:27,568][00929] Heartbeat connected on RolloutWorker_w6
[2023-08-30 09:42:27,571][00929] Heartbeat connected on RolloutWorker_w7
[2023-08-30 09:42:35,028][08348] Using optimizer <class 'torch.optim.adam.Adam'>
[2023-08-30 09:42:35,029][08348] No checkpoints found
[2023-08-30 09:42:35,029][08348] Did not load from checkpoint, starting from scratch!
[2023-08-30 09:42:35,029][08348] Initialized policy 0 weights for model version 0
[2023-08-30 09:42:35,032][08348] LearnerWorker_p0 finished initialization!
[2023-08-30 09:42:35,033][08348] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2023-08-30 09:42:35,033][00929] Heartbeat connected on LearnerWorker_p0
[2023-08-30 09:42:35,129][08361] RunningMeanStd input shape: (3, 72, 128)
[2023-08-30 09:42:35,130][08361] RunningMeanStd input shape: (1,)
[2023-08-30 09:42:35,142][08361] ConvEncoder: input_channels=3
[2023-08-30 09:42:35,242][08361] Conv encoder output size: 512
[2023-08-30 09:42:35,242][08361] Policy head output size: 512
[2023-08-30 09:42:35,356][00929] Inference worker 0-0 is ready!
[2023-08-30 09:42:35,358][00929] All inference workers are ready! Signal rollout workers to start!
[2023-08-30 09:42:35,723][08363] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,727][08367] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,729][08365] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,726][08369] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,773][08366] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,779][08362] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,775][08364] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:35,777][08368] Doom resolution: 160x120, resize resolution: (128, 72)
[2023-08-30 09:42:36,775][00929] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
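The periodic `Fps is (10 sec: …, 60 sec: …, 300 sec: …)` lines report frame throughput over three trailing time windows; the first report shows `nan` because there is not yet enough history. A minimal sketch of this kind of windowed rate estimate (plain Python; the class name and structure are illustrative, not Sample Factory's actual implementation):

```python
from collections import deque

class WindowedFps:
    """Track (timestamp, cumulative frame count) samples and report trailing-window FPS."""

    def __init__(self, max_window: float = 300.0):
        self.samples = deque()  # (time, cumulative frame count)
        self.max_window = max_window

    def record(self, now: float, total_frames: int) -> None:
        self.samples.append((now, total_frames))
        # Drop samples older than the largest window we report.
        while self.samples and now - self.samples[0][0] > self.max_window:
            self.samples.popleft()

    def fps(self, now: float, window: float) -> float:
        # Oldest sample that is at least `window` seconds in the past.
        old = [(t, f) for t, f in self.samples if now - t >= window]
        if not old or len(self.samples) < 2:
            return float("nan")  # not enough history yet, like the first log line
        t0, f0 = old[-1]
        t1, f1 = self.samples[-1]
        if t1 == t0:
            return float("nan")
        return (f1 - f0) / (t1 - t0)

meter = WindowedFps()
for i in range(7):  # a sample every 5 s, 4096 frames apart
    meter.record(now=5.0 * i, total_frames=4096 * i)
print(meter.fps(30.0, 10.0))  # → 819.2
```

Each window is computed from the same sample history, so the 60 s and 300 s figures lag behind the 10 s figure, as the later log lines show.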
[2023-08-30 09:42:36,781][08362] Decorrelating experience for 0 frames...
[2023-08-30 09:42:36,781][08366] Decorrelating experience for 0 frames...
[2023-08-30 09:42:37,120][08365] Decorrelating experience for 0 frames...
[2023-08-30 09:42:37,125][08363] Decorrelating experience for 0 frames...
[2023-08-30 09:42:37,128][08369] Decorrelating experience for 0 frames...
[2023-08-30 09:42:37,958][08362] Decorrelating experience for 32 frames...
[2023-08-30 09:42:37,956][08366] Decorrelating experience for 32 frames...
[2023-08-30 09:42:38,318][08365] Decorrelating experience for 32 frames...
[2023-08-30 09:42:38,321][08369] Decorrelating experience for 32 frames...
[2023-08-30 09:42:38,326][08363] Decorrelating experience for 32 frames...
[2023-08-30 09:42:38,855][08368] Decorrelating experience for 0 frames...
[2023-08-30 09:42:38,861][08364] Decorrelating experience for 0 frames...
[2023-08-30 09:42:39,951][08367] Decorrelating experience for 0 frames...
[2023-08-30 09:42:39,984][08369] Decorrelating experience for 64 frames...
[2023-08-30 09:42:40,501][08362] Decorrelating experience for 64 frames...
[2023-08-30 09:42:40,515][08366] Decorrelating experience for 64 frames...
[2023-08-30 09:42:40,865][08364] Decorrelating experience for 32 frames...
[2023-08-30 09:42:40,874][08368] Decorrelating experience for 32 frames...
[2023-08-30 09:42:41,746][08367] Decorrelating experience for 32 frames...
[2023-08-30 09:42:41,775][00929] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-30 09:42:41,980][08365] Decorrelating experience for 64 frames...
[2023-08-30 09:42:42,294][08362] Decorrelating experience for 96 frames...
[2023-08-30 09:42:42,408][08363] Decorrelating experience for 64 frames...
[2023-08-30 09:42:43,235][08368] Decorrelating experience for 64 frames...
[2023-08-30 09:42:44,299][08369] Decorrelating experience for 96 frames...
[2023-08-30 09:42:44,318][08364] Decorrelating experience for 64 frames...
[2023-08-30 09:42:44,462][08365] Decorrelating experience for 96 frames...
[2023-08-30 09:42:45,198][08363] Decorrelating experience for 96 frames...
[2023-08-30 09:42:46,471][08366] Decorrelating experience for 96 frames...
[2023-08-30 09:42:46,775][00929] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 5.8. Samples: 58. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-30 09:42:46,780][00929] Avg episode reward: [(0, '1.608')]
[2023-08-30 09:42:47,577][08367] Decorrelating experience for 64 frames...
[2023-08-30 09:42:48,552][08364] Decorrelating experience for 96 frames...
[2023-08-30 09:42:48,916][08368] Decorrelating experience for 96 frames...
[2023-08-30 09:42:51,087][08348] Signal inference workers to stop experience collection...
[2023-08-30 09:42:51,099][08361] InferenceWorker_p0-w0: stopping experience collection
[2023-08-30 09:42:51,284][08367] Decorrelating experience for 96 frames...
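The `Decorrelating experience for N frames...` lines show each rollout worker warming its environments up for a staggered number of frames before real collection begins, so the eight parallel workers do not all start from near-identical states and emit correlated trajectories. A toy illustration of per-worker staggering (hypothetical helper, not Sample Factory code):

```python
def decorrelation_frames(worker_idx: int, num_workers: int, max_frames: int = 128) -> int:
    """Give each worker a different warm-up length, spread evenly over [0, max_frames)."""
    return int(max_frames * worker_idx / num_workers)

# One warm-up offset per rollout worker, as in the 8-worker run above.
offsets = [decorrelation_frames(i, 8) for i in range(8)]
print(offsets)  # → [0, 16, 32, 48, 64, 80, 96, 112]
```

In the log each worker additionally reports its warm-up progress in 32-frame increments (0, 32, 64, 96), which is why the same worker PID appears several times during this phase.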
[2023-08-30 09:42:51,775][00929] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 148.3. Samples: 2224. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2023-08-30 09:42:51,777][00929] Avg episode reward: [(0, '2.857')]
[2023-08-30 09:42:54,842][08348] Signal inference workers to resume experience collection...
[2023-08-30 09:42:54,843][08361] InferenceWorker_p0-w0: resuming experience collection
[2023-08-30 09:42:56,775][00929] Fps is (10 sec: 409.6, 60 sec: 204.8, 300 sec: 204.8). Total num frames: 4096. Throughput: 0: 178.2. Samples: 3564. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2023-08-30 09:42:56,781][00929] Avg episode reward: [(0, '2.972')]
[2023-08-30 09:43:01,775][00929] Fps is (10 sec: 2457.6, 60 sec: 983.0, 300 sec: 983.0). Total num frames: 24576. Throughput: 0: 213.4. Samples: 5334. Policy #0 lag: (min: 0.0, avg: 0.9, max: 3.0)
[2023-08-30 09:43:01,787][00929] Avg episode reward: [(0, '3.534')]
[2023-08-30 09:43:06,775][00929] Fps is (10 sec: 3276.8, 60 sec: 1228.8, 300 sec: 1228.8). Total num frames: 36864. Throughput: 0: 299.1. Samples: 8974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:43:06,780][00929] Avg episode reward: [(0, '3.730')]
[2023-08-30 09:43:08,284][08361] Updated weights for policy 0, policy_version 10 (0.0020)
[2023-08-30 09:43:11,775][00929] Fps is (10 sec: 2457.6, 60 sec: 1404.3, 300 sec: 1404.3). Total num frames: 49152. Throughput: 0: 374.5. Samples: 13108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:43:11,777][00929] Avg episode reward: [(0, '4.289')]
[2023-08-30 09:43:16,775][00929] Fps is (10 sec: 3276.8, 60 sec: 1740.8, 300 sec: 1740.8). Total num frames: 69632. Throughput: 0: 400.2. Samples: 16010. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-30 09:43:16,777][00929] Avg episode reward: [(0, '4.534')]
[2023-08-30 09:43:19,524][08361] Updated weights for policy 0, policy_version 20 (0.0015)
[2023-08-30 09:43:21,775][00929] Fps is (10 sec: 3686.4, 60 sec: 1911.5, 300 sec: 1911.5). Total num frames: 86016. Throughput: 0: 486.0. Samples: 21870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:43:21,777][00929] Avg episode reward: [(0, '4.473')]
[2023-08-30 09:43:26,783][00929] Fps is (10 sec: 2865.0, 60 sec: 1965.8, 300 sec: 1965.8). Total num frames: 98304. Throughput: 0: 566.1. Samples: 25480. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:43:26,791][00929] Avg episode reward: [(0, '4.590')]
[2023-08-30 09:43:31,777][00929] Fps is (10 sec: 2047.6, 60 sec: 1936.2, 300 sec: 1936.2). Total num frames: 106496. Throughput: 0: 583.4. Samples: 26314. Policy #0 lag: (min: 0.0, avg: 0.3, max: 2.0)
[2023-08-30 09:43:31,783][00929] Avg episode reward: [(0, '4.575')]
[2023-08-30 09:43:31,796][08348] Saving new best policy, reward=4.575!
[2023-08-30 09:43:35,778][08361] Updated weights for policy 0, policy_version 30 (0.0022)
[2023-08-30 09:43:36,775][00929] Fps is (10 sec: 2459.5, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 122880. Throughput: 0: 632.9. Samples: 30706. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2023-08-30 09:43:36,777][00929] Avg episode reward: [(0, '4.546')]
[2023-08-30 09:43:41,775][00929] Fps is (10 sec: 3687.1, 60 sec: 2389.4, 300 sec: 2205.5). Total num frames: 143360. Throughput: 0: 732.1. Samples: 36510. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-30 09:43:41,779][00929] Avg episode reward: [(0, '4.370')]
[2023-08-30 09:43:46,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2594.1, 300 sec: 2223.5). Total num frames: 155648. Throughput: 0: 734.7. Samples: 38396. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:43:46,780][00929] Avg episode reward: [(0, '4.461')]
[2023-08-30 09:43:49,380][08361] Updated weights for policy 0, policy_version 40 (0.0035)
[2023-08-30 09:43:51,775][00929] Fps is (10 sec: 2457.5, 60 sec: 2798.9, 300 sec: 2239.1). Total num frames: 167936. Throughput: 0: 736.3. Samples: 42108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:43:51,778][00929] Avg episode reward: [(0, '4.437')]
[2023-08-30 09:43:56,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2355.2). Total num frames: 188416. Throughput: 0: 755.6. Samples: 47108. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:43:56,777][00929] Avg episode reward: [(0, '4.320')]
[2023-08-30 09:44:01,067][08361] Updated weights for policy 0, policy_version 50 (0.0021)
[2023-08-30 09:44:01,775][00929] Fps is (10 sec: 3686.5, 60 sec: 3003.7, 300 sec: 2409.4). Total num frames: 204800. Throughput: 0: 756.1. Samples: 50034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:44:01,777][00929] Avg episode reward: [(0, '4.354')]
[2023-08-30 09:44:01,788][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000050_204800.pth...
[2023-08-30 09:44:06,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3003.7, 300 sec: 2412.1). Total num frames: 217088. Throughput: 0: 734.5. Samples: 54922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:44:06,779][00929] Avg episode reward: [(0, '4.413')]
[2023-08-30 09:44:11,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3003.7, 300 sec: 2414.5). Total num frames: 229376. Throughput: 0: 737.9. Samples: 58680. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:44:11,782][00929] Avg episode reward: [(0, '4.545')]
[2023-08-30 09:44:16,118][08361] Updated weights for policy 0, policy_version 60 (0.0014)
[2023-08-30 09:44:16,775][00929] Fps is (10 sec: 2867.4, 60 sec: 2935.5, 300 sec: 2457.6). Total num frames: 245760. Throughput: 0: 760.4. Samples: 60530. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:44:16,778][00929] Avg episode reward: [(0, '4.568')]
[2023-08-30 09:44:21,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 2535.6). Total num frames: 266240. Throughput: 0: 788.8. Samples: 66200. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:44:21,785][00929] Avg episode reward: [(0, '4.490')]
[2023-08-30 09:44:26,775][00929] Fps is (10 sec: 3686.5, 60 sec: 3072.4, 300 sec: 2569.3). Total num frames: 282624. Throughput: 0: 774.2. Samples: 71350. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:44:26,781][00929] Avg episode reward: [(0, '4.379')]
[2023-08-30 09:44:27,559][08361] Updated weights for policy 0, policy_version 70 (0.0022)
[2023-08-30 09:44:31,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.4, 300 sec: 2564.5). Total num frames: 294912. Throughput: 0: 772.9. Samples: 73176. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:44:31,779][00929] Avg episode reward: [(0, '4.289')]
[2023-08-30 09:44:36,777][00929] Fps is (10 sec: 2457.1, 60 sec: 3071.9, 300 sec: 2560.0). Total num frames: 307200. Throughput: 0: 771.7. Samples: 76836. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:44:36,779][00929] Avg episode reward: [(0, '4.273')]
[2023-08-30 09:44:41,454][08361] Updated weights for policy 0, policy_version 80 (0.0039)
[2023-08-30 09:44:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 2621.4). Total num frames: 327680. Throughput: 0: 787.3. Samples: 82538. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-30 09:44:41,777][00929] Avg episode reward: [(0, '4.324')]
[2023-08-30 09:44:46,775][00929] Fps is (10 sec: 3687.1, 60 sec: 3140.3, 300 sec: 2646.6). Total num frames: 344064. Throughput: 0: 787.5. Samples: 85472. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:44:46,777][00929] Avg episode reward: [(0, '4.474')]
[2023-08-30 09:44:51,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2670.0). Total num frames: 360448. Throughput: 0: 772.6. Samples: 89690. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:44:51,782][00929] Avg episode reward: [(0, '4.308')]
[2023-08-30 09:44:55,002][08361] Updated weights for policy 0, policy_version 90 (0.0027)
[2023-08-30 09:44:56,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 2662.4). Total num frames: 372736. Throughput: 0: 774.0. Samples: 93510. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:44:56,779][00929] Avg episode reward: [(0, '4.381')]
[2023-08-30 09:45:01,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3072.0, 300 sec: 2683.6). Total num frames: 389120. Throughput: 0: 788.4. Samples: 96010. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:45:01,780][00929] Avg episode reward: [(0, '4.394')]
[2023-08-30 09:45:06,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3140.3, 300 sec: 2703.4). Total num frames: 405504. Throughput: 0: 785.4. Samples: 101542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:45:06,780][00929] Avg episode reward: [(0, '4.537')]
[2023-08-30 09:45:06,926][08361] Updated weights for policy 0, policy_version 100 (0.0021)
[2023-08-30 09:45:11,775][00929] Fps is (10 sec: 3277.0, 60 sec: 3208.5, 300 sec: 2721.9). Total num frames: 421888. Throughput: 0: 769.7. Samples: 105986. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:45:11,779][00929] Avg episode reward: [(0, '4.537')]
[2023-08-30 09:45:16,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 2713.6). Total num frames: 434176. Throughput: 0: 771.2. Samples: 107878. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:45:16,780][00929] Avg episode reward: [(0, '4.678')]
[2023-08-30 09:45:16,782][08348] Saving new best policy, reward=4.678!
[2023-08-30 09:45:21,471][08361] Updated weights for policy 0, policy_version 110 (0.0025)
[2023-08-30 09:45:21,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 2730.7). Total num frames: 450560. Throughput: 0: 783.6. Samples: 112096. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:45:21,777][00929] Avg episode reward: [(0, '4.647')]
[2023-08-30 09:45:26,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2770.8). Total num frames: 471040. Throughput: 0: 787.7. Samples: 117986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:45:26,778][00929] Avg episode reward: [(0, '4.503')]
[2023-08-30 09:45:31,776][00929] Fps is (10 sec: 3276.5, 60 sec: 3140.2, 300 sec: 2761.9). Total num frames: 483328. Throughput: 0: 783.4. Samples: 120726. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:45:31,778][00929] Avg episode reward: [(0, '4.610')]
[2023-08-30 09:45:33,648][08361] Updated weights for policy 0, policy_version 120 (0.0014)
[2023-08-30 09:45:36,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3140.4, 300 sec: 2753.4). Total num frames: 495616. Throughput: 0: 773.8. Samples: 124510. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:45:36,778][00929] Avg episode reward: [(0, '4.637')]
[2023-08-30 09:45:41,775][00929] Fps is (10 sec: 2867.4, 60 sec: 3072.0, 300 sec: 2767.6). Total num frames: 512000. Throughput: 0: 778.0. Samples: 128518. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:45:41,784][00929] Avg episode reward: [(0, '4.771')]
[2023-08-30 09:45:41,794][08348] Saving new best policy, reward=4.771!
[2023-08-30 09:45:46,704][08361] Updated weights for policy 0, policy_version 130 (0.0020)
[2023-08-30 09:45:46,775][00929] Fps is (10 sec: 3686.3, 60 sec: 3140.3, 300 sec: 2802.5). Total num frames: 532480. Throughput: 0: 786.7. Samples: 131410. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:45:46,783][00929] Avg episode reward: [(0, '4.785')]
[2023-08-30 09:45:46,786][08348] Saving new best policy, reward=4.785!
[2023-08-30 09:45:51,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2814.7). Total num frames: 548864. Throughput: 0: 791.5. Samples: 137158. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:45:51,781][00929] Avg episode reward: [(0, '4.563')]
[2023-08-30 09:45:56,775][00929] Fps is (10 sec: 2867.3, 60 sec: 3140.3, 300 sec: 2805.8). Total num frames: 561152. Throughput: 0: 774.1. Samples: 140822. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:45:56,779][00929] Avg episode reward: [(0, '4.692')]
[2023-08-30 09:46:01,436][08361] Updated weights for policy 0, policy_version 140 (0.0019)
[2023-08-30 09:46:01,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 2797.3). Total num frames: 573440. Throughput: 0: 771.9. Samples: 142612. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:46:01,781][00929] Avg episode reward: [(0, '4.649')]
[2023-08-30 09:46:01,796][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000140_573440.pth...
[2023-08-30 09:46:06,776][00929] Fps is (10 sec: 2457.4, 60 sec: 3003.7, 300 sec: 2789.2). Total num frames: 585728. Throughput: 0: 755.8. Samples: 146106. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-30 09:46:06,784][00929] Avg episode reward: [(0, '4.704')]
[2023-08-30 09:46:11,777][00929] Fps is (10 sec: 2047.6, 60 sec: 2867.1, 300 sec: 2762.4). Total num frames: 593920. Throughput: 0: 702.8. Samples: 149612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2023-08-30 09:46:11,784][00929] Avg episode reward: [(0, '4.433')]
[2023-08-30 09:46:16,775][00929] Fps is (10 sec: 2048.2, 60 sec: 2867.2, 300 sec: 2755.5). Total num frames: 606208. Throughput: 0: 677.5. Samples: 151214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:46:16,782][00929] Avg episode reward: [(0, '4.522')]
[2023-08-30 09:46:18,991][08361] Updated weights for policy 0, policy_version 150 (0.0018)
[2023-08-30 09:46:21,775][00929] Fps is (10 sec: 2458.1, 60 sec: 2798.9, 300 sec: 2748.9). Total num frames: 618496. Throughput: 0: 677.0. Samples: 154974. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:46:21,782][00929] Avg episode reward: [(0, '4.605')]
[2023-08-30 09:46:26,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2760.3). Total num frames: 634880. Throughput: 0: 686.7. Samples: 159420. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:46:26,777][00929] Avg episode reward: [(0, '4.669')]
[2023-08-30 09:46:31,628][08361] Updated weights for policy 0, policy_version 160 (0.0039)
[2023-08-30 09:46:31,775][00929] Fps is (10 sec: 3686.5, 60 sec: 2867.2, 300 sec: 2788.8). Total num frames: 655360. Throughput: 0: 684.7. Samples: 162222. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:46:31,781][00929] Avg episode reward: [(0, '4.560')]
[2023-08-30 09:46:36,775][00929] Fps is (10 sec: 3686.4, 60 sec: 2935.5, 300 sec: 2798.9). Total num frames: 671744. Throughput: 0: 675.2. Samples: 167542. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:46:36,779][00929] Avg episode reward: [(0, '4.498')]
[2023-08-30 09:46:41,776][00929] Fps is (10 sec: 2867.0, 60 sec: 2867.2, 300 sec: 2792.0). Total num frames: 684032. Throughput: 0: 675.0. Samples: 171198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:46:41,782][00929] Avg episode reward: [(0, '4.593')]
[2023-08-30 09:46:46,477][08361] Updated weights for policy 0, policy_version 170 (0.0040)
[2023-08-30 09:46:46,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2785.3). Total num frames: 696320. Throughput: 0: 677.1. Samples: 173080. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:46:46,782][00929] Avg episode reward: [(0, '4.531')]
[2023-08-30 09:46:51,775][00929] Fps is (10 sec: 2867.4, 60 sec: 2730.7, 300 sec: 2794.9). Total num frames: 712704. Throughput: 0: 718.2. Samples: 178424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:46:51,778][00929] Avg episode reward: [(0, '4.492')]
[2023-08-30 09:46:56,775][00929] Fps is (10 sec: 3686.4, 60 sec: 2867.2, 300 sec: 2819.9). Total num frames: 733184. Throughput: 0: 765.2. Samples: 184046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-30 09:46:56,778][00929] Avg episode reward: [(0, '4.489')]
[2023-08-30 09:46:57,634][08361] Updated weights for policy 0, policy_version 180 (0.0014)
[2023-08-30 09:47:01,779][00929] Fps is (10 sec: 3275.4, 60 sec: 2867.0, 300 sec: 2813.1). Total num frames: 745472. Throughput: 0: 769.7. Samples: 185852. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:47:01,784][00929] Avg episode reward: [(0, '4.443')]
[2023-08-30 09:47:06,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2806.5). Total num frames: 757760. Throughput: 0: 767.6. Samples: 189516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:47:06,777][00929] Avg episode reward: [(0, '4.673')]
[2023-08-30 09:47:11,775][00929] Fps is (10 sec: 2868.4, 60 sec: 3003.8, 300 sec: 2815.1). Total num frames: 774144. Throughput: 0: 784.1. Samples: 194704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:47:11,781][00929] Avg episode reward: [(0, '4.841')]
[2023-08-30 09:47:11,792][08348] Saving new best policy, reward=4.841!
[2023-08-30 09:47:12,057][08361] Updated weights for policy 0, policy_version 190 (0.0026)
[2023-08-30 09:47:16,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2837.9). Total num frames: 794624. Throughput: 0: 785.0. Samples: 197546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:47:16,783][00929] Avg episode reward: [(0, '5.025')]
[2023-08-30 09:47:16,785][08348] Saving new best policy, reward=5.025!
[2023-08-30 09:47:21,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3140.3, 300 sec: 2831.3). Total num frames: 806912. Throughput: 0: 767.6. Samples: 202082. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:47:21,780][00929] Avg episode reward: [(0, '5.094')]
[2023-08-30 09:47:21,792][08348] Saving new best policy, reward=5.094!
[2023-08-30 09:47:25,599][08361] Updated weights for policy 0, policy_version 200 (0.0013)
[2023-08-30 09:47:26,776][00929] Fps is (10 sec: 2457.2, 60 sec: 3071.9, 300 sec: 2824.8). Total num frames: 819200. Throughput: 0: 769.0. Samples: 205802. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:47:26,783][00929] Avg episode reward: [(0, '5.117')]
[2023-08-30 09:47:26,791][08348] Saving new best policy, reward=5.117!
[2023-08-30 09:47:31,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3003.7, 300 sec: 2832.5). Total num frames: 835584. Throughput: 0: 772.8. Samples: 207858. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2023-08-30 09:47:31,778][00929] Avg episode reward: [(0, '5.087')]
[2023-08-30 09:47:36,775][00929] Fps is (10 sec: 3687.0, 60 sec: 3072.0, 300 sec: 2901.9). Total num frames: 856064. Throughput: 0: 784.3. Samples: 213716. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:47:36,783][00929] Avg episode reward: [(0, '5.147')]
[2023-08-30 09:47:36,786][08348] Saving new best policy, reward=5.147!
[2023-08-30 09:47:37,267][08361] Updated weights for policy 0, policy_version 210 (0.0025)
[2023-08-30 09:47:41,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2957.5). Total num frames: 872448. Throughput: 0: 767.8. Samples: 218596. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:47:41,783][00929] Avg episode reward: [(0, '5.142')]
[2023-08-30 09:47:46,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 2999.1). Total num frames: 884736. Throughput: 0: 769.7. Samples: 220486. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:47:46,779][00929] Avg episode reward: [(0, '5.189')]
[2023-08-30 09:47:46,784][08348] Saving new best policy, reward=5.189!
[2023-08-30 09:47:51,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 897024. Throughput: 0: 773.4. Samples: 224320. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:47:51,782][00929] Avg episode reward: [(0, '5.157')]
[2023-08-30 09:47:51,881][08361] Updated weights for policy 0, policy_version 220 (0.0019)
[2023-08-30 09:47:56,775][00929] Fps is (10 sec: 3276.7, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 917504. Throughput: 0: 787.9. Samples: 230158. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2023-08-30 09:47:56,783][00929] Avg episode reward: [(0, '5.515')]
[2023-08-30 09:47:56,785][08348] Saving new best policy, reward=5.515!
[2023-08-30 09:48:01,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.5, 300 sec: 3040.8). Total num frames: 933888. Throughput: 0: 788.4. Samples: 233024. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:48:01,779][00929] Avg episode reward: [(0, '5.608')]
[2023-08-30 09:48:01,796][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000228_933888.pth...
[2023-08-30 09:48:01,933][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000050_204800.pth
[2023-08-30 09:48:01,945][08348] Saving new best policy, reward=5.608!
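The `Saving .../checkpoint_…pth` line followed by `Removing` an older checkpoint is a keep-last-N rotation (here the learner appears to keep the two most recent periodic checkpoints, with best-policy snapshots saved separately). A minimal sketch of that rotation policy (plain Python; the function name and file contents are illustrative, only the `checkpoint_{version}_{frames}.pth` naming is taken from the log):

```python
import os
import tempfile

def save_with_rotation(checkpoint_dir: str, version: int, frames: int, keep_last: int = 2) -> str:
    """Write a new checkpoint file and delete the oldest ones beyond `keep_last`."""
    name = f"checkpoint_{version:09d}_{frames}.pth"
    path = os.path.join(checkpoint_dir, name)
    with open(path, "wb") as f:
        f.write(b"fake-model-state")  # stand-in for torch.save(...)
    # Zero-padded version numbers make lexicographic order == chronological order.
    existing = sorted(p for p in os.listdir(checkpoint_dir) if p.startswith("checkpoint_"))
    for stale in existing[:-keep_last]:
        os.remove(os.path.join(checkpoint_dir, stale))
    return path

d = tempfile.mkdtemp()
save_with_rotation(d, 50, 204800)
save_with_rotation(d, 140, 573440)
save_with_rotation(d, 228, 933888)  # oldest (version 50) gets removed, as in the log
print(sorted(os.listdir(d)))
```

This mirrors the sequence above: checkpoint 228 is written, then checkpoint 50 is deleted, leaving checkpoints 140 and 228 on disk.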
[2023-08-30 09:48:04,200][08361] Updated weights for policy 0, policy_version 230 (0.0030)
[2023-08-30 09:48:06,775][00929] Fps is (10 sec: 2867.3, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 946176. Throughput: 0: 770.7. Samples: 236762. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2023-08-30 09:48:06,778][00929] Avg episode reward: [(0, '5.456')]
[2023-08-30 09:48:11,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3013.0). Total num frames: 958464. Throughput: 0: 770.2. Samples: 240458. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:48:11,781][00929] Avg episode reward: [(0, '5.251')]
[2023-08-30 09:48:16,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 978944. Throughput: 0: 787.4. Samples: 243290. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2023-08-30 09:48:16,781][00929] Avg episode reward: [(0, '5.006')]
[2023-08-30 09:48:17,355][08361] Updated weights for policy 0, policy_version 240 (0.0030)
[2023-08-30 09:48:21,775][00929] Fps is (10 sec: 4096.1, 60 sec: 3208.5, 300 sec: 3054.7). Total num frames: 999424. Throughput: 0: 789.6. Samples: 249248. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:48:21,779][00929] Avg episode reward: [(0, '5.384')]
[2023-08-30 09:48:26,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3208.6, 300 sec: 3068.5). Total num frames: 1011712. Throughput: 0: 773.3. Samples: 253394. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2023-08-30 09:48:26,781][00929] Avg episode reward: [(0, '5.643')]
[2023-08-30 09:48:26,783][08348] Saving new best policy, reward=5.643!
[2023-08-30 09:48:31,523][08361] Updated weights for policy 0, policy_version 250 (0.0024)
[2023-08-30 09:48:31,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3140.3, 300 sec: 3054.6). Total num frames: 1024000. Throughput: 0: 770.7. Samples: 255168. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2023-08-30 09:48:31,782][00929] Avg episode reward: [(0, '5.751')]
[2023-08-30 09:48:31,799][08348] Saving new best policy, reward=5.751!
[2023-08-30 09:48:36,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1040384. Throughput: 0: 784.7. Samples: 259632. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
[2023-08-30 09:48:36,777][00929] Avg episode reward: [(0, '5.916')] |
|
[2023-08-30 09:48:36,781][08348] Saving new best policy, reward=5.916! |
|
[2023-08-30 09:48:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 1056768. Throughput: 0: 781.2. Samples: 265312. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:48:41,782][00929] Avg episode reward: [(0, '5.853')] |
|
[2023-08-30 09:48:43,035][08361] Updated weights for policy 0, policy_version 260 (0.0013) |
|
[2023-08-30 09:48:46,780][00929] Fps is (10 sec: 3275.2, 60 sec: 3140.0, 300 sec: 3068.5). Total num frames: 1073152. Throughput: 0: 769.1. Samples: 267636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:48:46,787][00929] Avg episode reward: [(0, '6.014')] |
|
[2023-08-30 09:48:46,792][08348] Saving new best policy, reward=6.014! |
|
[2023-08-30 09:48:51,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3140.2, 300 sec: 3040.8). Total num frames: 1085440. Throughput: 0: 766.1. Samples: 271236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:48:51,780][00929] Avg episode reward: [(0, '6.219')] |
|
[2023-08-30 09:48:51,793][08348] Saving new best policy, reward=6.219! |
|
[2023-08-30 09:48:56,778][00929] Fps is (10 sec: 2458.1, 60 sec: 3003.6, 300 sec: 3026.8). Total num frames: 1097728. Throughput: 0: 778.0. Samples: 275468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:48:56,780][00929] Avg episode reward: [(0, '7.214')] |
|
[2023-08-30 09:48:56,782][08348] Saving new best policy, reward=7.214! |
|
[2023-08-30 09:48:58,067][08361] Updated weights for policy 0, policy_version 270 (0.0031) |
|
[2023-08-30 09:49:01,775][00929] Fps is (10 sec: 3277.1, 60 sec: 3072.0, 300 sec: 3054.7). Total num frames: 1118208. Throughput: 0: 778.8. Samples: 278336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:49:01,782][00929] Avg episode reward: [(0, '7.577')] |
|
[2023-08-30 09:49:01,797][08348] Saving new best policy, reward=7.577! |
|
[2023-08-30 09:49:06,775][00929] Fps is (10 sec: 3687.4, 60 sec: 3140.3, 300 sec: 3068.5). Total num frames: 1134592. Throughput: 0: 771.0. Samples: 283944. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:49:06,779][00929] Avg episode reward: [(0, '7.652')] |
|
[2023-08-30 09:49:06,782][08348] Saving new best policy, reward=7.652! |
|
[2023-08-30 09:49:10,947][08361] Updated weights for policy 0, policy_version 280 (0.0019) |
|
[2023-08-30 09:49:11,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3054.6). Total num frames: 1146880. Throughput: 0: 759.9. Samples: 287590. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:49:11,780][00929] Avg episode reward: [(0, '7.267')] |
|
[2023-08-30 09:49:16,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 1159168. Throughput: 0: 761.0. Samples: 289414. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:49:16,783][00929] Avg episode reward: [(0, '7.293')] |
|
[2023-08-30 09:49:21,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3003.7, 300 sec: 3040.8). Total num frames: 1179648. Throughput: 0: 778.5. Samples: 294666. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:49:21,777][00929] Avg episode reward: [(0, '7.186')] |
|
[2023-08-30 09:49:23,389][08361] Updated weights for policy 0, policy_version 290 (0.0020) |
|
[2023-08-30 09:49:26,775][00929] Fps is (10 sec: 4095.9, 60 sec: 3140.3, 300 sec: 3068.5). Total num frames: 1200128. Throughput: 0: 782.1. Samples: 300506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:49:26,781][00929] Avg episode reward: [(0, '7.638')] |
|
[2023-08-30 09:49:31,776][00929] Fps is (10 sec: 3276.5, 60 sec: 3140.2, 300 sec: 3068.5). Total num frames: 1212416. Throughput: 0: 771.8. Samples: 302364. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:49:31,779][00929] Avg episode reward: [(0, '7.803')] |
|
[2023-08-30 09:49:31,794][08348] Saving new best policy, reward=7.803! |
|
[2023-08-30 09:49:36,775][00929] Fps is (10 sec: 2457.7, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1224704. Throughput: 0: 773.0. Samples: 306020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:49:36,787][00929] Avg episode reward: [(0, '7.804')] |
|
[2023-08-30 09:49:38,135][08361] Updated weights for policy 0, policy_version 300 (0.0018) |
|
[2023-08-30 09:49:41,775][00929] Fps is (10 sec: 2867.5, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1241088. Throughput: 0: 788.1. Samples: 310930. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:49:41,777][00929] Avg episode reward: [(0, '7.631')] |
|
[2023-08-30 09:49:46,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3004.0, 300 sec: 3026.9). Total num frames: 1253376. Throughput: 0: 772.0. Samples: 313078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:49:46,777][00929] Avg episode reward: [(0, '7.464')] |
|
[2023-08-30 09:49:51,780][00929] Fps is (10 sec: 2456.4, 60 sec: 3003.5, 300 sec: 3026.8). Total num frames: 1265664. Throughput: 0: 722.5. Samples: 316458. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:49:51,782][00929] Avg episode reward: [(0, '7.729')] |
|
[2023-08-30 09:49:53,357][08361] Updated weights for policy 0, policy_version 310 (0.0033) |
|
[2023-08-30 09:49:56,775][00929] Fps is (10 sec: 2048.0, 60 sec: 2935.6, 300 sec: 2999.1). Total num frames: 1273856. Throughput: 0: 708.3. Samples: 319464. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:49:56,779][00929] Avg episode reward: [(0, '7.967')] |
|
[2023-08-30 09:49:56,781][08348] Saving new best policy, reward=7.967! |
|
[2023-08-30 09:50:01,775][00929] Fps is (10 sec: 2049.0, 60 sec: 2798.9, 300 sec: 2985.2). Total num frames: 1286144. Throughput: 0: 705.6. Samples: 321164. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:50:01,781][00929] Avg episode reward: [(0, '8.076')] |
|
[2023-08-30 09:50:01,800][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000314_1286144.pth... |
|
[2023-08-30 09:50:01,932][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000140_573440.pth |
|
[2023-08-30 09:50:01,943][08348] Saving new best policy, reward=8.076! |
|
[2023-08-30 09:50:06,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2985.2). Total num frames: 1302528. Throughput: 0: 693.2. Samples: 325858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:50:06,778][00929] Avg episode reward: [(0, '8.435')] |
|
[2023-08-30 09:50:06,842][08348] Saving new best policy, reward=8.435! |
|
[2023-08-30 09:50:08,086][08361] Updated weights for policy 0, policy_version 320 (0.0026) |
|
[2023-08-30 09:50:11,775][00929] Fps is (10 sec: 3686.4, 60 sec: 2935.5, 300 sec: 3013.0). Total num frames: 1323008. Throughput: 0: 692.0. Samples: 331644. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:50:11,778][00929] Avg episode reward: [(0, '8.677')] |
|
[2023-08-30 09:50:11,787][08348] Saving new best policy, reward=8.677! |
|
[2023-08-30 09:50:16,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2999.1). Total num frames: 1335296. Throughput: 0: 694.8. Samples: 333630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:50:16,778][00929] Avg episode reward: [(0, '8.693')] |
|
[2023-08-30 09:50:16,786][08348] Saving new best policy, reward=8.693! |
|
[2023-08-30 09:50:21,775][00929] Fps is (10 sec: 2457.5, 60 sec: 2798.9, 300 sec: 2971.3). Total num frames: 1347584. Throughput: 0: 695.5. Samples: 337316. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:50:21,782][00929] Avg episode reward: [(0, '9.026')] |
|
[2023-08-30 09:50:21,793][08348] Saving new best policy, reward=9.026! |
|
[2023-08-30 09:50:22,323][08361] Updated weights for policy 0, policy_version 330 (0.0027) |
|
[2023-08-30 09:50:26,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2730.7, 300 sec: 2985.2). Total num frames: 1363968. Throughput: 0: 690.8. Samples: 342014. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:50:26,777][00929] Avg episode reward: [(0, '9.266')] |
|
[2023-08-30 09:50:26,780][08348] Saving new best policy, reward=9.266! |
|
[2023-08-30 09:50:31,775][00929] Fps is (10 sec: 3686.6, 60 sec: 2867.3, 300 sec: 3013.0). Total num frames: 1384448. Throughput: 0: 705.8. Samples: 344838. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:50:31,778][00929] Avg episode reward: [(0, '8.932')] |
|
[2023-08-30 09:50:33,633][08361] Updated weights for policy 0, policy_version 340 (0.0029) |
|
[2023-08-30 09:50:36,779][00929] Fps is (10 sec: 3275.6, 60 sec: 2867.0, 300 sec: 2999.1). Total num frames: 1396736. Throughput: 0: 746.4. Samples: 350046. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:50:36,781][00929] Avg episode reward: [(0, '9.199')] |
|
[2023-08-30 09:50:41,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2985.2). Total num frames: 1413120. Throughput: 0: 762.7. Samples: 353786. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:50:41,783][00929] Avg episode reward: [(0, '9.290')] |
|
[2023-08-30 09:50:41,802][08348] Saving new best policy, reward=9.290! |
|
[2023-08-30 09:50:46,775][00929] Fps is (10 sec: 2868.3, 60 sec: 2867.2, 300 sec: 2971.3). Total num frames: 1425408. Throughput: 0: 765.9. Samples: 355630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:50:46,777][00929] Avg episode reward: [(0, '9.280')] |
|
[2023-08-30 09:50:48,242][08361] Updated weights for policy 0, policy_version 350 (0.0018) |
|
[2023-08-30 09:50:51,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3004.0, 300 sec: 2999.1). Total num frames: 1445888. Throughput: 0: 785.2. Samples: 361190. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 09:50:51,781][00929] Avg episode reward: [(0, '9.792')] |
|
[2023-08-30 09:50:51,792][08348] Saving new best policy, reward=9.792! |
|
[2023-08-30 09:50:56,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3013.0). Total num frames: 1462272. Throughput: 0: 776.8. Samples: 366602. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
|
[2023-08-30 09:50:56,777][00929] Avg episode reward: [(0, '9.829')] |
|
[2023-08-30 09:50:56,779][08348] Saving new best policy, reward=9.829! |
|
[2023-08-30 09:51:01,047][08361] Updated weights for policy 0, policy_version 360 (0.0032) |
|
[2023-08-30 09:51:01,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3013.0). Total num frames: 1474560. Throughput: 0: 770.6. Samples: 368308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:51:01,778][00929] Avg episode reward: [(0, '9.912')] |
|
[2023-08-30 09:51:01,794][08348] Saving new best policy, reward=9.912! |
|
[2023-08-30 09:51:06,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 1486848. Throughput: 0: 768.9. Samples: 371916. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:51:06,777][00929] Avg episode reward: [(0, '10.419')] |
|
[2023-08-30 09:51:06,781][08348] Saving new best policy, reward=10.419! |
|
[2023-08-30 09:51:11,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 1507328. Throughput: 0: 778.4. Samples: 377042. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:51:11,781][00929] Avg episode reward: [(0, '10.882')] |
|
[2023-08-30 09:51:11,792][08348] Saving new best policy, reward=10.882! |
|
[2023-08-30 09:51:14,228][08361] Updated weights for policy 0, policy_version 370 (0.0025) |
|
[2023-08-30 09:51:16,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3068.5). Total num frames: 1523712. Throughput: 0: 775.0. Samples: 379712. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:51:16,784][00929] Avg episode reward: [(0, '11.214')] |
|
[2023-08-30 09:51:16,787][08348] Saving new best policy, reward=11.214! |
|
[2023-08-30 09:51:21,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3054.6). Total num frames: 1536000. Throughput: 0: 754.3. Samples: 383986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:51:21,781][00929] Avg episode reward: [(0, '11.094')] |
|
[2023-08-30 09:51:26,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 1548288. Throughput: 0: 754.6. Samples: 387742. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:51:26,778][00929] Avg episode reward: [(0, '10.685')] |
|
[2023-08-30 09:51:28,930][08361] Updated weights for policy 0, policy_version 380 (0.0033) |
|
[2023-08-30 09:51:31,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 1564672. Throughput: 0: 765.0. Samples: 390054. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:51:31,780][00929] Avg episode reward: [(0, '11.228')] |
|
[2023-08-30 09:51:31,792][08348] Saving new best policy, reward=11.228! |
|
[2023-08-30 09:51:36,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.5, 300 sec: 3054.7). Total num frames: 1585152. Throughput: 0: 770.9. Samples: 395880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:51:36,778][00929] Avg episode reward: [(0, '11.064')] |
|
[2023-08-30 09:51:40,344][08361] Updated weights for policy 0, policy_version 390 (0.0019) |
|
[2023-08-30 09:51:41,778][00929] Fps is (10 sec: 3275.9, 60 sec: 3071.9, 300 sec: 3054.6). Total num frames: 1597440. Throughput: 0: 755.6. Samples: 400606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:51:41,780][00929] Avg episode reward: [(0, '11.284')] |
|
[2023-08-30 09:51:41,798][08348] Saving new best policy, reward=11.284! |
|
[2023-08-30 09:51:46,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1609728. Throughput: 0: 757.2. Samples: 402380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:51:46,782][00929] Avg episode reward: [(0, '12.141')] |
|
[2023-08-30 09:51:46,871][08348] Saving new best policy, reward=12.141! |
|
[2023-08-30 09:51:51,775][00929] Fps is (10 sec: 2868.0, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 1626112. Throughput: 0: 767.5. Samples: 406454. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:51:51,778][00929] Avg episode reward: [(0, '10.944')] |
|
[2023-08-30 09:51:54,173][08361] Updated weights for policy 0, policy_version 400 (0.0019) |
|
[2023-08-30 09:51:56,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 3054.7). Total num frames: 1646592. Throughput: 0: 784.1. Samples: 412326. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:51:56,778][00929] Avg episode reward: [(0, '10.044')] |
|
[2023-08-30 09:52:01,781][00929] Fps is (10 sec: 3684.1, 60 sec: 3139.9, 300 sec: 3068.5). Total num frames: 1662976. Throughput: 0: 788.7. Samples: 415210. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:52:01,786][00929] Avg episode reward: [(0, '10.547')] |
|
[2023-08-30 09:52:01,800][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000406_1662976.pth... |
|
[2023-08-30 09:52:01,952][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000228_933888.pth |
|
[2023-08-30 09:52:06,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3140.2, 300 sec: 3054.6). Total num frames: 1675264. Throughput: 0: 773.5. Samples: 418792. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:52:06,777][00929] Avg episode reward: [(0, '9.722')] |
|
[2023-08-30 09:52:08,154][08361] Updated weights for policy 0, policy_version 410 (0.0022) |
|
[2023-08-30 09:52:11,776][00929] Fps is (10 sec: 2458.9, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 1687552. Throughput: 0: 779.3. Samples: 422810. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:52:11,780][00929] Avg episode reward: [(0, '10.935')] |
|
[2023-08-30 09:52:16,775][00929] Fps is (10 sec: 3277.0, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 1708032. Throughput: 0: 793.2. Samples: 425746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:52:16,778][00929] Avg episode reward: [(0, '11.525')] |
|
[2023-08-30 09:52:19,448][08361] Updated weights for policy 0, policy_version 420 (0.0030) |
|
[2023-08-30 09:52:21,780][00929] Fps is (10 sec: 3684.9, 60 sec: 3140.0, 300 sec: 3068.5). Total num frames: 1724416. Throughput: 0: 793.6. Samples: 431594. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:52:21,783][00929] Avg episode reward: [(0, '11.301')] |
|
[2023-08-30 09:52:26,778][00929] Fps is (10 sec: 2866.4, 60 sec: 3140.1, 300 sec: 3054.6). Total num frames: 1736704. Throughput: 0: 773.3. Samples: 435404. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:52:26,780][00929] Avg episode reward: [(0, '11.711')] |
|
[2023-08-30 09:52:31,775][00929] Fps is (10 sec: 2868.6, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 1753088. Throughput: 0: 774.1. Samples: 437216. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:52:31,779][00929] Avg episode reward: [(0, '11.721')] |
|
[2023-08-30 09:52:34,040][08361] Updated weights for policy 0, policy_version 430 (0.0025) |
|
[2023-08-30 09:52:36,775][00929] Fps is (10 sec: 3277.7, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1769472. Throughput: 0: 790.6. Samples: 442030. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:52:36,778][00929] Avg episode reward: [(0, '11.507')] |
|
[2023-08-30 09:52:41,776][00929] Fps is (10 sec: 3686.1, 60 sec: 3208.6, 300 sec: 3068.5). Total num frames: 1789952. Throughput: 0: 793.2. Samples: 448020. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:52:41,778][00929] Avg episode reward: [(0, '11.811')] |
|
[2023-08-30 09:52:45,677][08361] Updated weights for policy 0, policy_version 440 (0.0040) |
|
[2023-08-30 09:52:46,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 3068.5). Total num frames: 1802240. Throughput: 0: 777.8. Samples: 450206. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:52:46,778][00929] Avg episode reward: [(0, '11.621')] |
|
[2023-08-30 09:52:51,775][00929] Fps is (10 sec: 2457.8, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 1814528. Throughput: 0: 780.9. Samples: 453934. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:52:51,779][00929] Avg episode reward: [(0, '11.287')] |
|
[2023-08-30 09:52:56,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 1830912. Throughput: 0: 797.0. Samples: 458674. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:52:56,782][00929] Avg episode reward: [(0, '11.080')] |
|
[2023-08-30 09:52:59,126][08361] Updated weights for policy 0, policy_version 450 (0.0016) |
|
[2023-08-30 09:53:01,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.6, 300 sec: 3068.5). Total num frames: 1851392. Throughput: 0: 796.8. Samples: 461602. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:53:01,778][00929] Avg episode reward: [(0, '11.027')] |
|
[2023-08-30 09:53:06,779][00929] Fps is (10 sec: 3685.0, 60 sec: 3208.4, 300 sec: 3082.4). Total num frames: 1867776. Throughput: 0: 786.3. Samples: 466978. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:53:06,783][00929] Avg episode reward: [(0, '12.275')] |
|
[2023-08-30 09:53:06,797][08348] Saving new best policy, reward=12.275! |
|
[2023-08-30 09:53:11,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3208.6, 300 sec: 3054.6). Total num frames: 1880064. Throughput: 0: 783.0. Samples: 470636. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:53:11,783][00929] Avg episode reward: [(0, '11.967')] |
|
[2023-08-30 09:53:12,760][08361] Updated weights for policy 0, policy_version 460 (0.0029) |
|
[2023-08-30 09:53:16,775][00929] Fps is (10 sec: 2868.2, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 1896448. Throughput: 0: 784.5. Samples: 472520. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:53:16,778][00929] Avg episode reward: [(0, '13.123')] |
|
[2023-08-30 09:53:16,786][08348] Saving new best policy, reward=13.123! |
|
[2023-08-30 09:53:21,775][00929] Fps is (10 sec: 3276.9, 60 sec: 3140.5, 300 sec: 3054.6). Total num frames: 1912832. Throughput: 0: 801.1. Samples: 478078. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:53:21,783][00929] Avg episode reward: [(0, '14.360')] |
|
[2023-08-30 09:53:21,848][08348] Saving new best policy, reward=14.360! |
|
[2023-08-30 09:53:23,946][08361] Updated weights for policy 0, policy_version 470 (0.0032) |
|
[2023-08-30 09:53:26,778][00929] Fps is (10 sec: 3276.0, 60 sec: 3208.5, 300 sec: 3068.5). Total num frames: 1929216. Throughput: 0: 785.6. Samples: 483372. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-30 09:53:26,780][00929] Avg episode reward: [(0, '14.991')] |
|
[2023-08-30 09:53:26,797][08348] Saving new best policy, reward=14.991! |
|
[2023-08-30 09:53:31,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3054.6). Total num frames: 1941504. Throughput: 0: 768.1. Samples: 484770. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:53:31,786][00929] Avg episode reward: [(0, '14.744')] |
|
[2023-08-30 09:53:36,775][00929] Fps is (10 sec: 2048.5, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 1949696. Throughput: 0: 750.7. Samples: 487714. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:53:36,782][00929] Avg episode reward: [(0, '14.547')] |
|
[2023-08-30 09:53:41,775][00929] Fps is (10 sec: 2048.0, 60 sec: 2867.2, 300 sec: 3013.0). Total num frames: 1961984. Throughput: 0: 716.7. Samples: 490924. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:53:41,778][00929] Avg episode reward: [(0, '15.176')] |
|
[2023-08-30 09:53:41,789][08348] Saving new best policy, reward=15.176! |
|
[2023-08-30 09:53:43,400][08361] Updated weights for policy 0, policy_version 480 (0.0046) |
|
[2023-08-30 09:53:46,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2935.5, 300 sec: 3026.9). Total num frames: 1978368. Throughput: 0: 700.2. Samples: 493112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:53:46,777][00929] Avg episode reward: [(0, '15.435')] |
|
[2023-08-30 09:53:46,782][08348] Saving new best policy, reward=15.435! |
|
[2023-08-30 09:53:51,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 3054.7). Total num frames: 1998848. Throughput: 0: 712.0. Samples: 499014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:53:51,777][00929] Avg episode reward: [(0, '16.284')] |
|
[2023-08-30 09:53:51,786][08348] Saving new best policy, reward=16.284! |
|
[2023-08-30 09:53:54,461][08361] Updated weights for policy 0, policy_version 490 (0.0014) |
|
[2023-08-30 09:53:56,780][00929] Fps is (10 sec: 3275.2, 60 sec: 3003.5, 300 sec: 3026.8). Total num frames: 2011136. Throughput: 0: 731.7. Samples: 503568. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:53:56,782][00929] Avg episode reward: [(0, '15.570')] |
|
[2023-08-30 09:54:01,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 3013.0). Total num frames: 2023424. Throughput: 0: 731.5. Samples: 505436. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:54:01,779][00929] Avg episode reward: [(0, '16.641')] |
|
[2023-08-30 09:54:01,796][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000494_2023424.pth... |
|
[2023-08-30 09:54:01,959][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000314_1286144.pth |
|
[2023-08-30 09:54:01,968][08348] Saving new best policy, reward=16.641! |
|
[2023-08-30 09:54:06,775][00929] Fps is (10 sec: 2868.6, 60 sec: 2867.4, 300 sec: 3026.9). Total num frames: 2039808. Throughput: 0: 694.0. Samples: 509308. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-30 09:54:06,777][00929] Avg episode reward: [(0, '17.165')] |
|
[2023-08-30 09:54:06,786][08348] Saving new best policy, reward=17.165! |
|
[2023-08-30 09:54:08,826][08361] Updated weights for policy 0, policy_version 500 (0.0023) |
|
[2023-08-30 09:54:11,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 3054.6). Total num frames: 2060288. Throughput: 0: 704.8. Samples: 515088. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:54:11,777][00929] Avg episode reward: [(0, '17.118')] |
|
[2023-08-30 09:54:16,775][00929] Fps is (10 sec: 3686.3, 60 sec: 3003.7, 300 sec: 3040.8). Total num frames: 2076672. Throughput: 0: 738.4. Samples: 517996. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:54:16,783][00929] Avg episode reward: [(0, '16.878')] |
|
[2023-08-30 09:54:21,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2999.1). Total num frames: 2084864. Throughput: 0: 754.4. Samples: 521664. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-30 09:54:21,789][00929] Avg episode reward: [(0, '16.741')] |
|
[2023-08-30 09:54:22,374][08361] Updated weights for policy 0, policy_version 510 (0.0016) |
|
[2023-08-30 09:54:26,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2867.3, 300 sec: 3013.0). Total num frames: 2101248. Throughput: 0: 769.2. Samples: 525540. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:54:26,783][00929] Avg episode reward: [(0, '16.425')] |
|
[2023-08-30 09:54:31,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 3026.9). Total num frames: 2117632. Throughput: 0: 784.7. Samples: 528424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:54:31,782][00929] Avg episode reward: [(0, '15.048')] |
|
[2023-08-30 09:54:34,304][08361] Updated weights for policy 0, policy_version 520 (0.0025) |
|
[2023-08-30 09:54:36,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 2138112. Throughput: 0: 781.8. Samples: 534194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:54:36,780][00929] Avg episode reward: [(0, '15.392')] |
|
[2023-08-30 09:54:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3140.3, 300 sec: 3040.8). Total num frames: 2150400. Throughput: 0: 766.2. Samples: 538044. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 09:54:41,779][00929] Avg episode reward: [(0, '15.161')] |
|
[2023-08-30 09:54:46,775][00929] Fps is (10 sec: 2048.0, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 2158592. Throughput: 0: 764.6. Samples: 539844. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:54:46,777][00929] Avg episode reward: [(0, '15.621')] |
|
[2023-08-30 09:54:49,146][08361] Updated weights for policy 0, policy_version 530 (0.0022) |
|
[2023-08-30 09:54:51,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3003.7, 300 sec: 3068.5). Total num frames: 2179072. Throughput: 0: 782.2. Samples: 544506. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 09:54:51,780][00929] Avg episode reward: [(0, '16.419')] |
|
[2023-08-30 09:54:56,775][00929] Fps is (10 sec: 4096.0, 60 sec: 3140.5, 300 sec: 3096.3). Total num frames: 2199552. Throughput: 0: 783.8. Samples: 550360. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:54:56,779][00929] Avg episode reward: [(0, '18.459')] |
|
[2023-08-30 09:54:56,787][08348] Saving new best policy, reward=18.459! |
|
[2023-08-30 09:55:01,265][08361] Updated weights for policy 0, policy_version 540 (0.0027) |
|
[2023-08-30 09:55:01,780][00929] Fps is (10 sec: 3275.1, 60 sec: 3140.0, 300 sec: 3082.4). Total num frames: 2211840. Throughput: 0: 767.2. Samples: 552522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:55:01,783][00929] Avg episode reward: [(0, '18.131')] |
|
[2023-08-30 09:55:06,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 2224128. Throughput: 0: 765.8. Samples: 556126. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:55:06,778][00929] Avg episode reward: [(0, '17.790')] |
|
[2023-08-30 09:55:11,775][00929] Fps is (10 sec: 2868.7, 60 sec: 3003.7, 300 sec: 3068.5). Total num frames: 2240512. Throughput: 0: 780.1. Samples: 560646. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2023-08-30 09:55:11,782][00929] Avg episode reward: [(0, '19.235')] |
|
[2023-08-30 09:55:11,797][08348] Saving new best policy, reward=19.235! |
|
[2023-08-30 09:55:14,565][08361] Updated weights for policy 0, policy_version 550 (0.0022) |
|
[2023-08-30 09:55:16,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3003.7, 300 sec: 3082.4). Total num frames: 2256896. Throughput: 0: 780.3. Samples: 563536. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:55:16,780][00929] Avg episode reward: [(0, '19.379')] |
|
[2023-08-30 09:55:16,787][08348] Saving new best policy, reward=19.379! |
|
[2023-08-30 09:55:21,777][00929] Fps is (10 sec: 3276.1, 60 sec: 3140.1, 300 sec: 3082.4). Total num frames: 2273280. Throughput: 0: 770.1. Samples: 568852. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:55:21,780][00929] Avg episode reward: [(0, '19.895')] |
|
[2023-08-30 09:55:21,797][08348] Saving new best policy, reward=19.895! |
|
[2023-08-30 09:55:26,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 2285568. Throughput: 0: 765.6. Samples: 572496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:55:26,784][00929] Avg episode reward: [(0, '19.231')] |
|
[2023-08-30 09:55:28,716][08361] Updated weights for policy 0, policy_version 560 (0.0031) |
|
[2023-08-30 09:55:31,775][00929] Fps is (10 sec: 2867.9, 60 sec: 3072.0, 300 sec: 3068.6). Total num frames: 2301952. Throughput: 0: 768.3. Samples: 574416. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:55:31,783][00929] Avg episode reward: [(0, '19.187')] |
|
[2023-08-30 09:55:36,777][00929] Fps is (10 sec: 3276.2, 60 sec: 3003.6, 300 sec: 3068.5). Total num frames: 2318336. Throughput: 0: 781.2. Samples: 579662. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:55:36,779][00929] Avg episode reward: [(0, '20.292')] |
|
[2023-08-30 09:55:36,873][08348] Saving new best policy, reward=20.292! |
|
[2023-08-30 09:55:40,163][08361] Updated weights for policy 0, policy_version 570 (0.0030) |
|
[2023-08-30 09:55:41,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3096.3). Total num frames: 2338816. Throughput: 0: 775.3. Samples: 585250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:55:41,782][00929] Avg episode reward: [(0, '21.940')] |
|
[2023-08-30 09:55:41,796][08348] Saving new best policy, reward=21.940! |
|
[2023-08-30 09:55:46,780][00929] Fps is (10 sec: 3275.8, 60 sec: 3208.3, 300 sec: 3068.5). Total num frames: 2351104. Throughput: 0: 767.9. Samples: 587078. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:55:46,791][00929] Avg episode reward: [(0, '22.504')] |
|
[2023-08-30 09:55:46,793][08348] Saving new best policy, reward=22.504! |
|
[2023-08-30 09:55:51,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 2363392. Throughput: 0: 767.1. Samples: 590646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:55:51,782][00929] Avg episode reward: [(0, '22.943')] |
|
[2023-08-30 09:55:51,795][08348] Saving new best policy, reward=22.943! |
|
[2023-08-30 09:55:55,122][08361] Updated weights for policy 0, policy_version 580 (0.0018) |
|
[2023-08-30 09:55:56,775][00929] Fps is (10 sec: 2868.6, 60 sec: 3003.7, 300 sec: 3068.5). Total num frames: 2379776. Throughput: 0: 778.0. Samples: 595654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:55:56,781][00929] Avg episode reward: [(0, '22.491')] |
|
[2023-08-30 09:56:01,775][00929] Fps is (10 sec: 3686.3, 60 sec: 3140.5, 300 sec: 3096.3). Total num frames: 2400256. Throughput: 0: 777.8. Samples: 598536. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:56:01,777][00929] Avg episode reward: [(0, '22.919')] |
|
[2023-08-30 09:56:01,791][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000586_2400256.pth... |
|
[2023-08-30 09:56:01,918][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000406_1662976.pth |
|
[2023-08-30 09:56:06,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3140.3, 300 sec: 3068.5). Total num frames: 2412544. Throughput: 0: 765.1. Samples: 603278. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:56:06,780][00929] Avg episode reward: [(0, '23.132')] |
|
[2023-08-30 09:56:06,783][08348] Saving new best policy, reward=23.132! |
|
[2023-08-30 09:56:07,737][08361] Updated weights for policy 0, policy_version 590 (0.0027) |
|
[2023-08-30 09:56:11,777][00929] Fps is (10 sec: 2457.2, 60 sec: 3071.9, 300 sec: 3054.6). Total num frames: 2424832. Throughput: 0: 767.0. Samples: 607012. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:56:11,788][00929] Avg episode reward: [(0, '21.716')] |
|
[2023-08-30 09:56:16,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 3068.5). Total num frames: 2441216. Throughput: 0: 768.8. Samples: 609010. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:56:16,778][00929] Avg episode reward: [(0, '21.120')] |
|
[2023-08-30 09:56:20,248][08361] Updated weights for policy 0, policy_version 600 (0.0018) |
|
[2023-08-30 09:56:21,775][00929] Fps is (10 sec: 3687.0, 60 sec: 3140.4, 300 sec: 3096.3). Total num frames: 2461696. Throughput: 0: 784.5. Samples: 614962. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:56:21,782][00929] Avg episode reward: [(0, '21.521')] |
|
[2023-08-30 09:56:26,775][00929] Fps is (10 sec: 3686.3, 60 sec: 3208.5, 300 sec: 3096.3). Total num frames: 2478080. Throughput: 0: 773.1. Samples: 620038. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:56:26,785][00929] Avg episode reward: [(0, '22.187')] |
|
[2023-08-30 09:56:31,775][00929] Fps is (10 sec: 2867.3, 60 sec: 3140.3, 300 sec: 3068.5). Total num frames: 2490368. Throughput: 0: 775.2. Samples: 621958. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:56:31,782][00929] Avg episode reward: [(0, '21.733')] |
|
[2023-08-30 09:56:34,597][08361] Updated weights for policy 0, policy_version 610 (0.0021) |
|
[2023-08-30 09:56:36,775][00929] Fps is (10 sec: 2457.7, 60 sec: 3072.1, 300 sec: 3068.6). Total num frames: 2502656. Throughput: 0: 779.4. Samples: 625720. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:56:36,783][00929] Avg episode reward: [(0, '21.048')] |
|
[2023-08-30 09:56:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3096.3). Total num frames: 2523136. Throughput: 0: 795.1. Samples: 631432. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 09:56:41,777][00929] Avg episode reward: [(0, '20.739')] |
|
[2023-08-30 09:56:45,528][08361] Updated weights for policy 0, policy_version 620 (0.0026) |
|
[2023-08-30 09:56:46,781][00929] Fps is (10 sec: 3684.2, 60 sec: 3140.2, 300 sec: 3096.2). Total num frames: 2539520. Throughput: 0: 796.5. Samples: 634382. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:56:46,784][00929] Avg episode reward: [(0, '20.753')] |
|
[2023-08-30 09:56:51,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3140.2, 300 sec: 3068.5). Total num frames: 2551808. Throughput: 0: 785.8. Samples: 638640. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:56:51,781][00929] Avg episode reward: [(0, '21.582')] |
|
[2023-08-30 09:56:56,775][00929] Fps is (10 sec: 2459.0, 60 sec: 3072.0, 300 sec: 3054.7). Total num frames: 2564096. Throughput: 0: 787.2. Samples: 642434. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:56:56,780][00929] Avg episode reward: [(0, '20.788')] |
|
[2023-08-30 09:57:00,319][08361] Updated weights for policy 0, policy_version 630 (0.0052) |
|
[2023-08-30 09:57:01,775][00929] Fps is (10 sec: 3277.1, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 2584576. Throughput: 0: 797.6. Samples: 644904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:57:01,777][00929] Avg episode reward: [(0, '21.233')] |
|
[2023-08-30 09:57:06,775][00929] Fps is (10 sec: 4096.1, 60 sec: 3208.5, 300 sec: 3110.2). Total num frames: 2605056. Throughput: 0: 792.3. Samples: 650616. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:57:06,784][00929] Avg episode reward: [(0, '22.984')] |
|
[2023-08-30 09:57:11,781][00929] Fps is (10 sec: 2865.5, 60 sec: 3140.1, 300 sec: 3068.5). Total num frames: 2613248. Throughput: 0: 765.9. Samples: 654506. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:57:11,784][00929] Avg episode reward: [(0, '22.997')] |
|
[2023-08-30 09:57:14,646][08361] Updated weights for policy 0, policy_version 640 (0.0023) |
|
[2023-08-30 09:57:16,777][00929] Fps is (10 sec: 2047.6, 60 sec: 3071.9, 300 sec: 3054.7). Total num frames: 2625536. Throughput: 0: 754.6. Samples: 655918. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 09:57:16,780][00929] Avg episode reward: [(0, '23.528')] |
|
[2023-08-30 09:57:16,791][08348] Saving new best policy, reward=23.528! |
|
[2023-08-30 09:57:21,777][00929] Fps is (10 sec: 2048.8, 60 sec: 2867.1, 300 sec: 3040.8). Total num frames: 2633728. Throughput: 0: 734.1. Samples: 658758. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:57:21,779][00929] Avg episode reward: [(0, '23.801')] |
|
[2023-08-30 09:57:21,798][08348] Saving new best policy, reward=23.801! |
|
[2023-08-30 09:57:26,775][00929] Fps is (10 sec: 2048.4, 60 sec: 2798.9, 300 sec: 3026.9). Total num frames: 2646016. Throughput: 0: 686.9. Samples: 662342. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:57:26,777][00929] Avg episode reward: [(0, '22.149')] |
|
[2023-08-30 09:57:30,223][08361] Updated weights for policy 0, policy_version 650 (0.0052) |
|
[2023-08-30 09:57:31,775][00929] Fps is (10 sec: 3277.4, 60 sec: 2935.5, 300 sec: 3040.8). Total num frames: 2666496. Throughput: 0: 687.2. Samples: 665300. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 09:57:31,777][00929] Avg episode reward: [(0, '20.917')] |
|
[2023-08-30 09:57:36,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3003.7, 300 sec: 3026.9). Total num frames: 2682880. Throughput: 0: 720.3. Samples: 671052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:57:36,779][00929] Avg episode reward: [(0, '19.428')] |
|
[2023-08-30 09:57:41,775][00929] Fps is (10 sec: 2867.1, 60 sec: 2867.2, 300 sec: 3026.9). Total num frames: 2695168. Throughput: 0: 719.6. Samples: 674816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:57:41,781][00929] Avg episode reward: [(0, '19.113')] |
|
[2023-08-30 09:57:43,767][08361] Updated weights for policy 0, policy_version 660 (0.0021) |
|
[2023-08-30 09:57:46,779][00929] Fps is (10 sec: 2456.7, 60 sec: 2799.0, 300 sec: 3026.8). Total num frames: 2707456. Throughput: 0: 706.2. Samples: 676684. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:57:46,781][00929] Avg episode reward: [(0, '18.422')] |
|
[2023-08-30 09:57:51,775][00929] Fps is (10 sec: 3276.9, 60 sec: 2935.5, 300 sec: 3040.8). Total num frames: 2727936. Throughput: 0: 688.8. Samples: 681614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:57:51,778][00929] Avg episode reward: [(0, '17.458')] |
|
[2023-08-30 09:57:55,425][08361] Updated weights for policy 0, policy_version 670 (0.0040) |
|
[2023-08-30 09:57:56,775][00929] Fps is (10 sec: 4097.5, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 2748416. Throughput: 0: 735.7. Samples: 687610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:57:56,782][00929] Avg episode reward: [(0, '18.022')] |
|
[2023-08-30 09:58:01,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 3026.9). Total num frames: 2760704. Throughput: 0: 750.1. Samples: 689672. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:58:01,778][00929] Avg episode reward: [(0, '19.078')] |
|
[2023-08-30 09:58:01,792][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000674_2760704.pth... |
|
[2023-08-30 09:58:01,940][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000494_2023424.pth |
|
[2023-08-30 09:58:06,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 3026.9). Total num frames: 2772992. Throughput: 0: 768.3. Samples: 693328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:58:06,780][00929] Avg episode reward: [(0, '19.386')] |
|
[2023-08-30 09:58:10,227][08361] Updated weights for policy 0, policy_version 680 (0.0028) |
|
[2023-08-30 09:58:11,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2935.8, 300 sec: 3026.9). Total num frames: 2789376. Throughput: 0: 789.2. Samples: 697856. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:58:11,782][00929] Avg episode reward: [(0, '20.637')] |
|
[2023-08-30 09:58:16,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3072.1, 300 sec: 3040.8). Total num frames: 2809856. Throughput: 0: 788.7. Samples: 700792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:58:16,783][00929] Avg episode reward: [(0, '20.559')] |
|
[2023-08-30 09:58:21,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3208.6, 300 sec: 3040.8). Total num frames: 2826240. Throughput: 0: 779.2. Samples: 706114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:58:21,782][00929] Avg episode reward: [(0, '20.081')] |
|
[2023-08-30 09:58:21,798][08361] Updated weights for policy 0, policy_version 690 (0.0019) |
|
[2023-08-30 09:58:26,776][00929] Fps is (10 sec: 2457.4, 60 sec: 3140.2, 300 sec: 3026.9). Total num frames: 2834432. Throughput: 0: 779.9. Samples: 709912. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:58:26,778][00929] Avg episode reward: [(0, '19.842')] |
|
[2023-08-30 09:58:31,775][00929] Fps is (10 sec: 2457.5, 60 sec: 3072.0, 300 sec: 3054.6). Total num frames: 2850816. Throughput: 0: 778.6. Samples: 711720. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:58:31,781][00929] Avg episode reward: [(0, '20.079')] |
|
[2023-08-30 09:58:35,544][08361] Updated weights for policy 0, policy_version 700 (0.0021) |
|
[2023-08-30 09:58:36,775][00929] Fps is (10 sec: 3686.7, 60 sec: 3140.3, 300 sec: 3082.4). Total num frames: 2871296. Throughput: 0: 789.7. Samples: 717152. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:58:36,778][00929] Avg episode reward: [(0, '20.859')] |
|
[2023-08-30 09:58:41,775][00929] Fps is (10 sec: 3686.5, 60 sec: 3208.5, 300 sec: 3082.4). Total num frames: 2887680. Throughput: 0: 781.0. Samples: 722754. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:58:41,779][00929] Avg episode reward: [(0, '20.212')] |
|
[2023-08-30 09:58:46,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3208.7, 300 sec: 3054.6). Total num frames: 2899968. Throughput: 0: 775.8. Samples: 724584. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:58:46,780][00929] Avg episode reward: [(0, '21.956')] |
|
[2023-08-30 09:58:49,260][08361] Updated weights for policy 0, policy_version 710 (0.0018) |
|
[2023-08-30 09:58:51,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3054.7). Total num frames: 2912256. Throughput: 0: 777.7. Samples: 728326. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:58:51,781][00929] Avg episode reward: [(0, '22.191')] |
|
[2023-08-30 09:58:56,775][00929] Fps is (10 sec: 3276.7, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 2932736. Throughput: 0: 793.8. Samples: 733576. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:58:56,784][00929] Avg episode reward: [(0, '23.650')] |
|
[2023-08-30 09:59:00,861][08361] Updated weights for policy 0, policy_version 720 (0.0016) |
|
[2023-08-30 09:59:01,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 3082.4). Total num frames: 2949120. Throughput: 0: 792.7. Samples: 736462. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:59:01,785][00929] Avg episode reward: [(0, '24.201')] |
|
[2023-08-30 09:59:01,859][08348] Saving new best policy, reward=24.201! |
|
[2023-08-30 09:59:06,775][00929] Fps is (10 sec: 3276.9, 60 sec: 3208.5, 300 sec: 3068.5). Total num frames: 2965504. Throughput: 0: 779.4. Samples: 741186. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:59:06,780][00929] Avg episode reward: [(0, '22.902')] |
|
[2023-08-30 09:59:11,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3140.3, 300 sec: 3054.6). Total num frames: 2977792. Throughput: 0: 777.6. Samples: 744904. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 09:59:11,781][00929] Avg episode reward: [(0, '22.849')] |
|
[2023-08-30 09:59:15,435][08361] Updated weights for policy 0, policy_version 730 (0.0039) |
|
[2023-08-30 09:59:16,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 2994176. Throughput: 0: 783.0. Samples: 746956. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 09:59:16,779][00929] Avg episode reward: [(0, '22.449')] |
|
[2023-08-30 09:59:21,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 3010560. Throughput: 0: 792.0. Samples: 752794. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:59:21,783][00929] Avg episode reward: [(0, '20.782')] |
|
[2023-08-30 09:59:26,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3208.6, 300 sec: 3082.4). Total num frames: 3026944. Throughput: 0: 775.6. Samples: 757654. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:59:26,777][00929] Avg episode reward: [(0, '20.663')] |
|
[2023-08-30 09:59:27,528][08361] Updated weights for policy 0, policy_version 740 (0.0018) |
|
[2023-08-30 09:59:31,776][00929] Fps is (10 sec: 2867.0, 60 sec: 3140.2, 300 sec: 3054.6). Total num frames: 3039232. Throughput: 0: 775.0. Samples: 759460. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:59:31,783][00929] Avg episode reward: [(0, '20.744')] |
|
[2023-08-30 09:59:36,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3003.7, 300 sec: 3054.6). Total num frames: 3051520. Throughput: 0: 773.7. Samples: 763144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:59:36,777][00929] Avg episode reward: [(0, '20.729')] |
|
[2023-08-30 09:59:41,022][08361] Updated weights for policy 0, policy_version 750 (0.0031) |
|
[2023-08-30 09:59:41,775][00929] Fps is (10 sec: 3277.1, 60 sec: 3072.0, 300 sec: 3096.3). Total num frames: 3072000. Throughput: 0: 785.6. Samples: 768926. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 09:59:41,780][00929] Avg episode reward: [(0, '20.627')] |
|
[2023-08-30 09:59:46,775][00929] Fps is (10 sec: 4096.0, 60 sec: 3208.5, 300 sec: 3096.3). Total num frames: 3092480. Throughput: 0: 785.6. Samples: 771814. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 09:59:46,781][00929] Avg episode reward: [(0, '20.361')] |
|
[2023-08-30 09:59:51,781][00929] Fps is (10 sec: 3274.8, 60 sec: 3208.2, 300 sec: 3068.5). Total num frames: 3104768. Throughput: 0: 773.4. Samples: 775992. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 09:59:51,786][00929] Avg episode reward: [(0, '20.170')] |
|
[2023-08-30 09:59:54,918][08361] Updated weights for policy 0, policy_version 760 (0.0018) |
|
[2023-08-30 09:59:56,779][00929] Fps is (10 sec: 2456.6, 60 sec: 3071.8, 300 sec: 3068.5). Total num frames: 3117056. Throughput: 0: 775.4. Samples: 779798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 09:59:56,781][00929] Avg episode reward: [(0, '19.749')] |
|
[2023-08-30 10:00:01,775][00929] Fps is (10 sec: 2868.9, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 3133440. Throughput: 0: 785.9. Samples: 782320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:00:01,782][00929] Avg episode reward: [(0, '20.542')] |
|
[2023-08-30 10:00:01,795][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth... |
|
[2023-08-30 10:00:01,962][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000586_2400256.pth |
|
[2023-08-30 10:00:06,494][08361] Updated weights for policy 0, policy_version 770 (0.0019) |
|
[2023-08-30 10:00:06,775][00929] Fps is (10 sec: 3687.8, 60 sec: 3140.3, 300 sec: 3096.3). Total num frames: 3153920. Throughput: 0: 781.6. Samples: 787964. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:00:06,777][00929] Avg episode reward: [(0, '21.079')] |
|
[2023-08-30 10:00:11,776][00929] Fps is (10 sec: 3276.6, 60 sec: 3140.2, 300 sec: 3082.4). Total num frames: 3166208. Throughput: 0: 769.6. Samples: 792288. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:00:11,778][00929] Avg episode reward: [(0, '21.720')] |
|
[2023-08-30 10:00:16,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3068.5). Total num frames: 3178496. Throughput: 0: 769.4. Samples: 794082. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:00:16,783][00929] Avg episode reward: [(0, '22.941')] |
|
[2023-08-30 10:00:21,682][08361] Updated weights for policy 0, policy_version 780 (0.0047) |
|
[2023-08-30 10:00:21,775][00929] Fps is (10 sec: 2867.4, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 3194880. Throughput: 0: 776.0. Samples: 798066. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:00:21,778][00929] Avg episode reward: [(0, '23.070')] |
|
[2023-08-30 10:00:26,775][00929] Fps is (10 sec: 3276.9, 60 sec: 3072.0, 300 sec: 3082.4). Total num frames: 3211264. Throughput: 0: 767.9. Samples: 803480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:00:26,782][00929] Avg episode reward: [(0, '23.684')] |
|
[2023-08-30 10:00:31,776][00929] Fps is (10 sec: 3276.6, 60 sec: 3140.3, 300 sec: 3082.4). Total num frames: 3227648. Throughput: 0: 760.8. Samples: 806052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:00:31,783][00929] Avg episode reward: [(0, '24.276')] |
|
[2023-08-30 10:00:31,795][08348] Saving new best policy, reward=24.276! |
|
[2023-08-30 10:00:35,237][08361] Updated weights for policy 0, policy_version 790 (0.0018) |
|
[2023-08-30 10:00:36,775][00929] Fps is (10 sec: 2457.6, 60 sec: 3072.0, 300 sec: 3040.8). Total num frames: 3235840. Throughput: 0: 746.6. Samples: 809586. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 10:00:36,778][00929] Avg episode reward: [(0, '23.867')] |
|
[2023-08-30 10:00:41,775][00929] Fps is (10 sec: 2457.8, 60 sec: 3003.7, 300 sec: 3054.7). Total num frames: 3252224. Throughput: 0: 744.9. Samples: 813316. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:00:41,778][00929] Avg episode reward: [(0, '23.237')] |
|
[2023-08-30 10:00:46,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 3068.5). Total num frames: 3268608. Throughput: 0: 748.8. Samples: 816016. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:00:46,778][00929] Avg episode reward: [(0, '22.512')] |
|
[2023-08-30 10:00:48,642][08361] Updated weights for policy 0, policy_version 800 (0.0021) |
|
[2023-08-30 10:00:51,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3004.0, 300 sec: 3068.5). Total num frames: 3284992. Throughput: 0: 738.5. Samples: 821198. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:00:51,782][00929] Avg episode reward: [(0, '21.082')] |
|
[2023-08-30 10:00:56,779][00929] Fps is (10 sec: 2456.6, 60 sec: 2935.5, 300 sec: 3026.8). Total num frames: 3293184. Throughput: 0: 702.9. Samples: 823922. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:00:56,782][00929] Avg episode reward: [(0, '21.854')] |
|
[2023-08-30 10:01:01,775][00929] Fps is (10 sec: 1638.4, 60 sec: 2798.9, 300 sec: 3013.0). Total num frames: 3301376. Throughput: 0: 691.7. Samples: 825210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:01:01,777][00929] Avg episode reward: [(0, '21.598')] |
|
[2023-08-30 10:01:06,775][00929] Fps is (10 sec: 1639.0, 60 sec: 2594.1, 300 sec: 2999.1). Total num frames: 3309568. Throughput: 0: 664.3. Samples: 827958. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:01:06,778][00929] Avg episode reward: [(0, '21.764')] |
|
[2023-08-30 10:01:09,096][08361] Updated weights for policy 0, policy_version 810 (0.0038) |
|
[2023-08-30 10:01:11,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2999.1). Total num frames: 3325952. Throughput: 0: 634.9. Samples: 832052. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:01:11,782][00929] Avg episode reward: [(0, '23.210')] |
|
[2023-08-30 10:01:16,775][00929] Fps is (10 sec: 3276.9, 60 sec: 2730.7, 300 sec: 2985.2). Total num frames: 3342336. Throughput: 0: 636.4. Samples: 834690. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:01:16,777][00929] Avg episode reward: [(0, '22.533')] |
|
[2023-08-30 10:01:21,769][08361] Updated weights for policy 0, policy_version 820 (0.0032) |
|
[2023-08-30 10:01:21,777][00929] Fps is (10 sec: 3276.2, 60 sec: 2730.6, 300 sec: 2985.2). Total num frames: 3358720. Throughput: 0: 664.6. Samples: 839494. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:01:21,781][00929] Avg episode reward: [(0, '22.746')] |
|
[2023-08-30 10:01:26,781][00929] Fps is (10 sec: 2456.2, 60 sec: 2593.9, 300 sec: 2971.3). Total num frames: 3366912. Throughput: 0: 658.0. Samples: 842928. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:01:26,793][00929] Avg episode reward: [(0, '23.046')] |
|
[2023-08-30 10:01:31,775][00929] Fps is (10 sec: 2458.1, 60 sec: 2594.2, 300 sec: 2985.2). Total num frames: 3383296. Throughput: 0: 638.4. Samples: 844746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:01:31,777][00929] Avg episode reward: [(0, '22.930')] |
|
[2023-08-30 10:01:35,800][08361] Updated weights for policy 0, policy_version 830 (0.0043) |
|
[2023-08-30 10:01:36,775][00929] Fps is (10 sec: 3278.7, 60 sec: 2730.7, 300 sec: 2971.3). Total num frames: 3399680. Throughput: 0: 641.4. Samples: 850060. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:01:36,783][00929] Avg episode reward: [(0, '23.925')] |
|
[2023-08-30 10:01:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2971.4). Total num frames: 3416064. Throughput: 0: 701.6. Samples: 855490. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 10:01:41,777][00929] Avg episode reward: [(0, '23.810')] |
|
[2023-08-30 10:01:46,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2730.7, 300 sec: 2985.2). Total num frames: 3432448. Throughput: 0: 713.2. Samples: 857306. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:01:46,780][00929] Avg episode reward: [(0, '23.476')] |
|
[2023-08-30 10:01:49,859][08361] Updated weights for policy 0, policy_version 840 (0.0026) |
|
[2023-08-30 10:01:51,775][00929] Fps is (10 sec: 2457.6, 60 sec: 2594.1, 300 sec: 2971.3). Total num frames: 3440640. Throughput: 0: 732.3. Samples: 860910. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:01:51,780][00929] Avg episode reward: [(0, '23.864')] |
|
[2023-08-30 10:01:56,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2799.1, 300 sec: 2971.3). Total num frames: 3461120. Throughput: 0: 756.2. Samples: 866080. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:01:56,777][00929] Avg episode reward: [(0, '26.185')] |
|
[2023-08-30 10:01:56,779][08348] Saving new best policy, reward=26.185! |
|
[2023-08-30 10:02:01,572][08361] Updated weights for policy 0, policy_version 850 (0.0015) |
|
[2023-08-30 10:02:01,775][00929] Fps is (10 sec: 4096.0, 60 sec: 3003.7, 300 sec: 2971.3). Total num frames: 3481600. Throughput: 0: 760.6. Samples: 868918. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:02:01,777][00929] Avg episode reward: [(0, '25.201')] |
|
[2023-08-30 10:02:01,789][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000850_3481600.pth... |
|
[2023-08-30 10:02:01,983][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000674_2760704.pth |
|
[2023-08-30 10:02:06,778][00929] Fps is (10 sec: 2866.4, 60 sec: 3003.6, 300 sec: 2971.4). Total num frames: 3489792. Throughput: 0: 749.5. Samples: 873224. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:02:06,783][00929] Avg episode reward: [(0, '25.638')] |
|
[2023-08-30 10:02:11,780][00929] Fps is (10 sec: 2456.3, 60 sec: 3003.5, 300 sec: 2985.2). Total num frames: 3506176. Throughput: 0: 755.2. Samples: 876912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:02:11,783][00929] Avg episode reward: [(0, '24.721')] |
|
[2023-08-30 10:02:16,536][08361] Updated weights for policy 0, policy_version 860 (0.0016) |
|
[2023-08-30 10:02:16,775][00929] Fps is (10 sec: 3277.7, 60 sec: 3003.7, 300 sec: 3013.0). Total num frames: 3522560. Throughput: 0: 763.1. Samples: 879084. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:02:16,782][00929] Avg episode reward: [(0, '24.812')] |
|
[2023-08-30 10:02:21,775][00929] Fps is (10 sec: 3278.5, 60 sec: 3003.8, 300 sec: 3026.9). Total num frames: 3538944. Throughput: 0: 771.6. Samples: 884782. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 10:02:21,777][00929] Avg episode reward: [(0, '24.940')] |
|
[2023-08-30 10:02:26,775][00929] Fps is (10 sec: 3276.7, 60 sec: 3140.6, 300 sec: 3013.0). Total num frames: 3555328. Throughput: 0: 754.6. Samples: 889448. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:02:26,778][00929] Avg episode reward: [(0, '23.564')] |
|
[2023-08-30 10:02:29,767][08361] Updated weights for policy 0, policy_version 870 (0.0023) |
|
[2023-08-30 10:02:31,775][00929] Fps is (10 sec: 2867.3, 60 sec: 3072.0, 300 sec: 2999.1). Total num frames: 3567616. Throughput: 0: 755.5. Samples: 891304. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:02:31,777][00929] Avg episode reward: [(0, '22.765')] |
|
[2023-08-30 10:02:36,775][00929] Fps is (10 sec: 2457.7, 60 sec: 3003.7, 300 sec: 2999.1). Total num frames: 3579904. Throughput: 0: 758.7. Samples: 895052. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 10:02:36,780][00929] Avg episode reward: [(0, '23.878')] |
|
[2023-08-30 10:02:41,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.0, 300 sec: 3026.9). Total num frames: 3600384. Throughput: 0: 769.9. Samples: 900724. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:02:41,780][00929] Avg episode reward: [(0, '23.860')] |
|
[2023-08-30 10:02:42,507][08361] Updated weights for policy 0, policy_version 880 (0.0025) |
|
[2023-08-30 10:02:46,779][00929] Fps is (10 sec: 3685.0, 60 sec: 3071.8, 300 sec: 3013.0). Total num frames: 3616768. Throughput: 0: 770.1. Samples: 903574. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 10:02:46,781][00929] Avg episode reward: [(0, '25.106')] |
|
[2023-08-30 10:02:51,777][00929] Fps is (10 sec: 2866.5, 60 sec: 3140.1, 300 sec: 2985.2). Total num frames: 3629056. Throughput: 0: 754.9. Samples: 907194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:02:51,780][00929] Avg episode reward: [(0, '25.137')] |
|
[2023-08-30 10:02:56,775][00929] Fps is (10 sec: 2458.6, 60 sec: 3003.7, 300 sec: 2985.2). Total num frames: 3641344. Throughput: 0: 754.0. Samples: 910840. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2023-08-30 10:02:56,783][00929] Avg episode reward: [(0, '25.695')] |
|
[2023-08-30 10:02:57,422][08361] Updated weights for policy 0, policy_version 890 (0.0043) |
|
[2023-08-30 10:03:01,775][00929] Fps is (10 sec: 2867.9, 60 sec: 2935.5, 300 sec: 2999.1). Total num frames: 3657728. Throughput: 0: 769.2. Samples: 913698. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 10:03:01,780][00929] Avg episode reward: [(0, '25.637')] |
|
[2023-08-30 10:03:06,775][00929] Fps is (10 sec: 3686.4, 60 sec: 3140.4, 300 sec: 3013.0). Total num frames: 3678208. Throughput: 0: 770.8. Samples: 919468. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:06,780][00929] Avg episode reward: [(0, '26.118')] |
|
[2023-08-30 10:03:09,440][08361] Updated weights for policy 0, policy_version 900 (0.0013) |
|
[2023-08-30 10:03:11,775][00929] Fps is (10 sec: 3276.8, 60 sec: 3072.3, 300 sec: 2985.2). Total num frames: 3690496. Throughput: 0: 754.5. Samples: 923400. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:11,777][00929] Avg episode reward: [(0, '25.551')] |
|
[2023-08-30 10:03:16,775][00929] Fps is (10 sec: 2457.5, 60 sec: 3003.7, 300 sec: 2971.3). Total num frames: 3702784. Throughput: 0: 754.3. Samples: 925250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:03:16,783][00929] Avg episode reward: [(0, '24.363')] |
|
[2023-08-30 10:03:21,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3003.8, 300 sec: 2999.1). Total num frames: 3719168. Throughput: 0: 769.5. Samples: 929680. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 10:03:21,778][00929] Avg episode reward: [(0, '24.707')] |
|
[2023-08-30 10:03:23,137][08361] Updated weights for policy 0, policy_version 910 (0.0014) |
|
[2023-08-30 10:03:26,775][00929] Fps is (10 sec: 3686.5, 60 sec: 3072.0, 300 sec: 3013.0). Total num frames: 3739648. Throughput: 0: 769.9. Samples: 935368. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2023-08-30 10:03:26,780][00929] Avg episode reward: [(0, '25.228')] |
|
[2023-08-30 10:03:31,776][00929] Fps is (10 sec: 3276.6, 60 sec: 3072.0, 300 sec: 2985.2). Total num frames: 3751936. Throughput: 0: 758.5. Samples: 937704. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2023-08-30 10:03:31,778][00929] Avg episode reward: [(0, '24.362')] |
|
[2023-08-30 10:03:36,776][00929] Fps is (10 sec: 2457.4, 60 sec: 3072.0, 300 sec: 2971.3). Total num frames: 3764224. Throughput: 0: 758.9. Samples: 941344. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:36,780][00929] Avg episode reward: [(0, '25.041')] |
|
[2023-08-30 10:03:37,596][08361] Updated weights for policy 0, policy_version 920 (0.0020) |
|
[2023-08-30 10:03:41,775][00929] Fps is (10 sec: 2457.8, 60 sec: 2935.5, 300 sec: 2971.3). Total num frames: 3776512. Throughput: 0: 765.5. Samples: 945288. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:41,783][00929] Avg episode reward: [(0, '24.240')] |
|
[2023-08-30 10:03:46,775][00929] Fps is (10 sec: 3277.0, 60 sec: 3003.9, 300 sec: 2999.1). Total num frames: 3796992. Throughput: 0: 763.1. Samples: 948038. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:46,782][00929] Avg episode reward: [(0, '25.614')] |
|
[2023-08-30 10:03:49,535][08361] Updated weights for policy 0, policy_version 930 (0.0021) |
|
[2023-08-30 10:03:51,780][00929] Fps is (10 sec: 3684.6, 60 sec: 3071.9, 300 sec: 2985.2). Total num frames: 3813376. Throughput: 0: 754.5. Samples: 953422. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:03:51,785][00929] Avg episode reward: [(0, '25.186')] |
|
[2023-08-30 10:03:56,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3072.0, 300 sec: 2971.3). Total num frames: 3825664. Throughput: 0: 745.2. Samples: 956934. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:03:56,777][00929] Avg episode reward: [(0, '24.931')] |
|
[2023-08-30 10:04:01,777][00929] Fps is (10 sec: 2458.3, 60 sec: 3003.6, 300 sec: 2957.4). Total num frames: 3837952. Throughput: 0: 741.8. Samples: 958630. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:04:01,782][00929] Avg episode reward: [(0, '25.353')] |
|
[2023-08-30 10:04:01,794][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000937_3837952.pth... |
|
[2023-08-30 10:04:01,944][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000765_3133440.pth |
|
[2023-08-30 10:04:04,927][08361] Updated weights for policy 0, policy_version 940 (0.0015) |
|
[2023-08-30 10:04:06,775][00929] Fps is (10 sec: 2867.2, 60 sec: 2935.5, 300 sec: 2971.3). Total num frames: 3854336. Throughput: 0: 753.4. Samples: 963582. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:04:06,778][00929] Avg episode reward: [(0, '24.709')] |
|
[2023-08-30 10:04:11,784][00929] Fps is (10 sec: 3683.8, 60 sec: 3071.5, 300 sec: 2985.1). Total num frames: 3874816. Throughput: 0: 749.9. Samples: 969120. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:04:11,787][00929] Avg episode reward: [(0, '23.791')] |
|
[2023-08-30 10:04:16,779][00929] Fps is (10 sec: 2866.1, 60 sec: 3003.6, 300 sec: 2957.4). Total num frames: 3883008. Throughput: 0: 738.5. Samples: 970940. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2023-08-30 10:04:16,789][00929] Avg episode reward: [(0, '23.441')] |
|
[2023-08-30 10:04:18,662][08361] Updated weights for policy 0, policy_version 950 (0.0025) |
|
[2023-08-30 10:04:21,775][00929] Fps is (10 sec: 2459.7, 60 sec: 3003.7, 300 sec: 2957.4). Total num frames: 3899392. Throughput: 0: 738.2. Samples: 974562. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2023-08-30 10:04:21,786][00929] Avg episode reward: [(0, '23.148')] |
|
[2023-08-30 10:04:26,775][00929] Fps is (10 sec: 3278.1, 60 sec: 2935.5, 300 sec: 2971.3). Total num frames: 3915776. Throughput: 0: 756.7. Samples: 979338. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:04:26,784][00929] Avg episode reward: [(0, '23.885')] |
|
[2023-08-30 10:04:30,897][08361] Updated weights for policy 0, policy_version 960 (0.0019) |
|
[2023-08-30 10:04:31,775][00929] Fps is (10 sec: 3276.9, 60 sec: 3003.8, 300 sec: 2985.2). Total num frames: 3932160. Throughput: 0: 759.7. Samples: 982224. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2023-08-30 10:04:31,784][00929] Avg episode reward: [(0, '25.048')] |
|
[2023-08-30 10:04:36,775][00929] Fps is (10 sec: 2867.2, 60 sec: 3003.8, 300 sec: 2957.5). Total num frames: 3944448. Throughput: 0: 733.8. Samples: 986438. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:04:36,782][00929] Avg episode reward: [(0, '24.974')] |
|
[2023-08-30 10:04:41,778][00929] Fps is (10 sec: 2047.4, 60 sec: 2935.3, 300 sec: 2915.8). Total num frames: 3952640. Throughput: 0: 715.5. Samples: 989134. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:04:41,788][00929] Avg episode reward: [(0, '24.900')] |
|
[2023-08-30 10:04:46,775][00929] Fps is (10 sec: 1638.4, 60 sec: 2730.7, 300 sec: 2902.0). Total num frames: 3960832. Throughput: 0: 708.3. Samples: 990504. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2023-08-30 10:04:46,780][00929] Avg episode reward: [(0, '25.070')] |
|
[2023-08-30 10:04:50,775][08361] Updated weights for policy 0, policy_version 970 (0.0042) |
|
[2023-08-30 10:04:51,775][00929] Fps is (10 sec: 2048.5, 60 sec: 2662.6, 300 sec: 2901.9). Total num frames: 3973120. Throughput: 0: 667.0. Samples: 993598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:04:51,784][00929] Avg episode reward: [(0, '24.310')] |
|
[2023-08-30 10:04:56,775][00929] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2915.8). Total num frames: 3993600. Throughput: 0: 669.7. Samples: 999250. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2023-08-30 10:04:56,777][00929] Avg episode reward: [(0, '25.254')] |
|
[2023-08-30 10:04:59,316][08348] Stopping Batcher_0... |
|
[2023-08-30 10:04:59,317][08348] Loop batcher_evt_loop terminating... |
|
[2023-08-30 10:04:59,317][00929] Component Batcher_0 stopped! |
|
[2023-08-30 10:04:59,326][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-30 10:04:59,396][00929] Component RolloutWorker_w4 stopped! |
|
[2023-08-30 10:04:59,405][08366] Stopping RolloutWorker_w4... |
|
[2023-08-30 10:04:59,405][08366] Loop rollout_proc4_evt_loop terminating... |
|
[2023-08-30 10:04:59,397][08361] Weights refcount: 2 0 |
|
[2023-08-30 10:04:59,414][08361] Stopping InferenceWorker_p0-w0... |
|
[2023-08-30 10:04:59,414][00929] Component InferenceWorker_p0-w0 stopped! |
|
[2023-08-30 10:04:59,433][00929] Component RolloutWorker_w6 stopped! |
|
[2023-08-30 10:04:59,435][00929] Component RolloutWorker_w3 stopped! |
|
[2023-08-30 10:04:59,439][08365] Stopping RolloutWorker_w3... |
|
[2023-08-30 10:04:59,440][08365] Loop rollout_proc3_evt_loop terminating... |
|
[2023-08-30 10:04:59,446][00929] Component RolloutWorker_w1 stopped! |
|
[2023-08-30 10:04:59,447][08363] Stopping RolloutWorker_w1... |
|
[2023-08-30 10:04:59,451][08361] Loop inference_proc0-0_evt_loop terminating... |
|
[2023-08-30 10:04:59,455][08363] Loop rollout_proc1_evt_loop terminating... |
|
[2023-08-30 10:04:59,433][08368] Stopping RolloutWorker_w6... |
|
[2023-08-30 10:04:59,457][08368] Loop rollout_proc6_evt_loop terminating... |
|
[2023-08-30 10:04:59,457][00929] Component RolloutWorker_w5 stopped! |
|
[2023-08-30 10:04:59,459][08367] Stopping RolloutWorker_w5... |
|
[2023-08-30 10:04:59,463][00929] Component RolloutWorker_w7 stopped! |
|
[2023-08-30 10:04:59,465][08369] Stopping RolloutWorker_w7... |
|
[2023-08-30 10:04:59,459][08367] Loop rollout_proc5_evt_loop terminating... |
|
[2023-08-30 10:04:59,467][08369] Loop rollout_proc7_evt_loop terminating... |
|
[2023-08-30 10:04:59,477][08364] Stopping RolloutWorker_w2... |
|
[2023-08-30 10:04:59,477][00929] Component RolloutWorker_w2 stopped! |
|
[2023-08-30 10:04:59,482][08364] Loop rollout_proc2_evt_loop terminating... |
|
[2023-08-30 10:04:59,522][08362] Stopping RolloutWorker_w0... |
|
[2023-08-30 10:04:59,523][08362] Loop rollout_proc0_evt_loop terminating... |
|
[2023-08-30 10:04:59,522][00929] Component RolloutWorker_w0 stopped! |
|
[2023-08-30 10:04:59,548][08348] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000850_3481600.pth |
|
[2023-08-30 10:04:59,562][08348] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-30 10:04:59,747][08348] Stopping LearnerWorker_p0... |
|
[2023-08-30 10:04:59,747][00929] Component LearnerWorker_p0 stopped! |
|
[2023-08-30 10:04:59,748][08348] Loop learner_proc0_evt_loop terminating... |
|
[2023-08-30 10:04:59,749][00929] Waiting for process learner_proc0 to stop... |
|
[2023-08-30 10:05:02,005][00929] Waiting for process inference_proc0-0 to join... |
|
[2023-08-30 10:05:02,008][00929] Waiting for process rollout_proc0 to join... |
|
[2023-08-30 10:05:04,717][00929] Waiting for process rollout_proc1 to join... |
|
[2023-08-30 10:05:04,720][00929] Waiting for process rollout_proc2 to join... |
|
[2023-08-30 10:05:04,721][00929] Waiting for process rollout_proc3 to join... |
|
[2023-08-30 10:05:04,723][00929] Waiting for process rollout_proc4 to join... |
|
[2023-08-30 10:05:04,725][00929] Waiting for process rollout_proc5 to join... |
|
[2023-08-30 10:05:04,727][00929] Waiting for process rollout_proc6 to join... |
|
[2023-08-30 10:05:04,728][00929] Waiting for process rollout_proc7 to join... |
|
[2023-08-30 10:05:04,729][00929] Batcher 0 profile tree view: |
|
batching: 30.0293, releasing_batches: 0.0235 |
|
[2023-08-30 10:05:04,736][00929] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
wait_policy_total: 599.3278 |
|
update_model: 8.9797 |
|
weight_update: 0.0040 |
|
one_step: 0.0027 |
|
handle_policy_step: 680.8084 |
|
deserialize: 17.8320, stack: 3.3653, obs_to_device_normalize: 129.3780, forward: 377.1985, send_messages: 32.1604 |
|
prepare_outputs: 88.6094 |
|
to_cpu: 50.6414 |
|
[2023-08-30 10:05:04,737][00929] Learner 0 profile tree view: |
|
misc: 0.0059, prepare_batch: 19.9335 |
|
train: 77.3170 |
|
epoch_init: 0.0083, minibatch_init: 0.0135, losses_postprocess: 0.6532, kl_divergence: 0.6640, after_optimizer: 4.2534 |
|
calculate_losses: 26.5385 |
|
losses_init: 0.0041, forward_head: 1.4542, bptt_initial: 17.1643, tail: 1.2550, advantages_returns: 0.3518, losses: 3.6558 |
|
bptt: 2.3040 |
|
bptt_forward_core: 2.1865 |
|
update: 44.5742 |
|
clip: 32.7087 |
|
[2023-08-30 10:05:04,738][00929] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.3928, enqueue_policy_requests: 176.7038, env_step: 1003.2468, overhead: 28.5822, complete_rollouts: 8.6126 |
|
save_policy_outputs: 27.1381 |
|
split_output_tensors: 12.6970 |
|
[2023-08-30 10:05:04,740][00929] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3993, enqueue_policy_requests: 186.2463, env_step: 996.2913, overhead: 27.8348, complete_rollouts: 8.0003 |
|
save_policy_outputs: 25.0977 |
|
split_output_tensors: 11.7235 |
|
[2023-08-30 10:05:04,741][00929] Loop Runner_EvtLoop terminating... |
|
[2023-08-30 10:05:04,742][00929] Runner profile tree view: |
|
main_loop: 1377.1708 |
|
[2023-08-30 10:05:04,744][00929] Collected {0: 4005888}, FPS: 2908.8 |
|
[2023-08-30 10:05:15,846][00929] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2023-08-30 10:05:15,847][00929] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2023-08-30 10:05:15,851][00929] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2023-08-30 10:05:15,854][00929] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2023-08-30 10:05:15,855][00929] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-08-30 10:05:15,857][00929] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2023-08-30 10:05:15,858][00929] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-08-30 10:05:15,860][00929] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2023-08-30 10:05:15,861][00929] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2023-08-30 10:05:15,863][00929] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2023-08-30 10:05:15,873][00929] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2023-08-30 10:05:15,875][00929] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2023-08-30 10:05:15,876][00929] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2023-08-30 10:05:15,877][00929] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2023-08-30 10:05:15,878][00929] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2023-08-30 10:05:15,910][00929] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2023-08-30 10:05:15,916][00929] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-08-30 10:05:15,919][00929] RunningMeanStd input shape: (1,) |
|
[2023-08-30 10:05:15,935][00929] ConvEncoder: input_channels=3 |
|
[2023-08-30 10:05:16,057][00929] Conv encoder output size: 512 |
|
[2023-08-30 10:05:16,059][00929] Policy head output size: 512 |
|
[2023-08-30 10:05:19,349][00929] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-30 10:05:20,857][00929] Num frames 100... |
|
[2023-08-30 10:05:20,985][00929] Num frames 200... |
|
[2023-08-30 10:05:21,119][00929] Num frames 300... |
|
[2023-08-30 10:05:21,254][00929] Num frames 400... |
|
[2023-08-30 10:05:21,386][00929] Num frames 500... |
|
[2023-08-30 10:05:21,511][00929] Num frames 600... |
|
[2023-08-30 10:05:21,639][00929] Num frames 700... |
|
[2023-08-30 10:05:21,770][00929] Num frames 800... |
|
[2023-08-30 10:05:21,906][00929] Num frames 900... |
|
[2023-08-30 10:05:22,040][00929] Num frames 1000... |
|
[2023-08-30 10:05:22,173][00929] Num frames 1100... |
|
[2023-08-30 10:05:22,303][00929] Num frames 1200... |
|
[2023-08-30 10:05:22,430][00929] Num frames 1300... |
|
[2023-08-30 10:05:22,559][00929] Num frames 1400... |
|
[2023-08-30 10:05:22,691][00929] Num frames 1500... |
|
[2023-08-30 10:05:22,753][00929] Avg episode rewards: #0: 33.040, true rewards: #0: 15.040 |
|
[2023-08-30 10:05:22,755][00929] Avg episode reward: 33.040, avg true_objective: 15.040 |
|
[2023-08-30 10:05:22,884][00929] Num frames 1600... |
|
[2023-08-30 10:05:23,010][00929] Num frames 1700... |
|
[2023-08-30 10:05:23,141][00929] Num frames 1800... |
|
[2023-08-30 10:05:23,273][00929] Num frames 1900... |
|
[2023-08-30 10:05:23,404][00929] Num frames 2000... |
|
[2023-08-30 10:05:23,542][00929] Num frames 2100... |
|
[2023-08-30 10:05:23,665][00929] Num frames 2200... |
|
[2023-08-30 10:05:23,792][00929] Num frames 2300... |
|
[2023-08-30 10:05:23,918][00929] Num frames 2400... |
|
[2023-08-30 10:05:24,057][00929] Avg episode rewards: #0: 27.820, true rewards: #0: 12.320 |
|
[2023-08-30 10:05:24,059][00929] Avg episode reward: 27.820, avg true_objective: 12.320 |
|
[2023-08-30 10:05:24,123][00929] Num frames 2500... |
|
[2023-08-30 10:05:24,266][00929] Num frames 2600... |
|
[2023-08-30 10:05:24,409][00929] Num frames 2700... |
|
[2023-08-30 10:05:24,546][00929] Num frames 2800... |
|
[2023-08-30 10:05:24,686][00929] Num frames 2900... |
|
[2023-08-30 10:05:24,823][00929] Num frames 3000... |
|
[2023-08-30 10:05:24,967][00929] Num frames 3100... |
|
[2023-08-30 10:05:25,101][00929] Num frames 3200... |
|
[2023-08-30 10:05:25,234][00929] Num frames 3300... |
|
[2023-08-30 10:05:25,368][00929] Num frames 3400... |
|
[2023-08-30 10:05:25,502][00929] Num frames 3500... |
|
[2023-08-30 10:05:25,631][00929] Num frames 3600... |
|
[2023-08-30 10:05:25,761][00929] Num frames 3700... |
|
[2023-08-30 10:05:25,888][00929] Num frames 3800... |
|
[2023-08-30 10:05:26,019][00929] Num frames 3900... |
|
[2023-08-30 10:05:26,156][00929] Num frames 4000... |
|
[2023-08-30 10:05:26,295][00929] Avg episode rewards: #0: 34.213, true rewards: #0: 13.547 |
|
[2023-08-30 10:05:26,296][00929] Avg episode reward: 34.213, avg true_objective: 13.547 |
|
[2023-08-30 10:05:26,351][00929] Num frames 4100... |
|
[2023-08-30 10:05:26,492][00929] Num frames 4200... |
|
[2023-08-30 10:05:26,624][00929] Num frames 4300... |
|
[2023-08-30 10:05:26,764][00929] Num frames 4400... |
|
[2023-08-30 10:05:26,895][00929] Num frames 4500... |
|
[2023-08-30 10:05:27,024][00929] Num frames 4600... |
|
[2023-08-30 10:05:27,158][00929] Num frames 4700... |
|
[2023-08-30 10:05:27,290][00929] Num frames 4800... |
|
[2023-08-30 10:05:27,433][00929] Num frames 4900... |
|
[2023-08-30 10:05:27,569][00929] Num frames 5000... |
|
[2023-08-30 10:05:27,708][00929] Num frames 5100... |
|
[2023-08-30 10:05:27,800][00929] Avg episode rewards: #0: 31.570, true rewards: #0: 12.820 |
|
[2023-08-30 10:05:27,802][00929] Avg episode reward: 31.570, avg true_objective: 12.820 |
|
[2023-08-30 10:05:27,901][00929] Num frames 5200... |
|
[2023-08-30 10:05:28,030][00929] Num frames 5300... |
|
[2023-08-30 10:05:28,166][00929] Num frames 5400... |
|
[2023-08-30 10:05:28,305][00929] Num frames 5500... |
|
[2023-08-30 10:05:28,436][00929] Num frames 5600... |
|
[2023-08-30 10:05:28,567][00929] Num frames 5700... |
|
[2023-08-30 10:05:28,697][00929] Num frames 5800... |
|
[2023-08-30 10:05:28,825][00929] Num frames 5900... |
|
[2023-08-30 10:05:28,958][00929] Num frames 6000... |
|
[2023-08-30 10:05:29,084][00929] Num frames 6100... |
|
[2023-08-30 10:05:29,226][00929] Num frames 6200... |
|
[2023-08-30 10:05:29,353][00929] Num frames 6300... |
|
[2023-08-30 10:05:29,488][00929] Num frames 6400... |
|
[2023-08-30 10:05:29,618][00929] Num frames 6500... |
|
[2023-08-30 10:05:29,700][00929] Avg episode rewards: #0: 32.038, true rewards: #0: 13.038 |
|
[2023-08-30 10:05:29,702][00929] Avg episode reward: 32.038, avg true_objective: 13.038 |
|
[2023-08-30 10:05:29,825][00929] Num frames 6600... |
|
[2023-08-30 10:05:29,958][00929] Num frames 6700... |
|
[2023-08-30 10:05:30,158][00929] Num frames 6800... |
|
[2023-08-30 10:05:30,352][00929] Num frames 6900... |
|
[2023-08-30 10:05:30,541][00929] Num frames 7000... |
|
[2023-08-30 10:05:30,726][00929] Num frames 7100... |
|
[2023-08-30 10:05:30,907][00929] Num frames 7200... |
|
[2023-08-30 10:05:31,090][00929] Num frames 7300... |
|
[2023-08-30 10:05:31,278][00929] Num frames 7400... |
|
[2023-08-30 10:05:31,470][00929] Num frames 7500... |
|
[2023-08-30 10:05:31,668][00929] Num frames 7600... |
|
[2023-08-30 10:05:31,742][00929] Avg episode rewards: #0: 30.175, true rewards: #0: 12.675 |
|
[2023-08-30 10:05:31,744][00929] Avg episode reward: 30.175, avg true_objective: 12.675 |
|
[2023-08-30 10:05:31,945][00929] Num frames 7700... |
|
[2023-08-30 10:05:32,133][00929] Num frames 7800... |
|
[2023-08-30 10:05:32,330][00929] Num frames 7900... |
|
[2023-08-30 10:05:32,524][00929] Num frames 8000... |
|
[2023-08-30 10:05:32,718][00929] Num frames 8100... |
|
[2023-08-30 10:05:32,905][00929] Num frames 8200... |
|
[2023-08-30 10:05:33,095][00929] Num frames 8300... |
|
[2023-08-30 10:05:33,288][00929] Num frames 8400... |
|
[2023-08-30 10:05:33,485][00929] Num frames 8500... |
|
[2023-08-30 10:05:33,672][00929] Num frames 8600... |
|
[2023-08-30 10:05:33,862][00929] Num frames 8700... |
|
[2023-08-30 10:05:33,971][00929] Avg episode rewards: #0: 29.321, true rewards: #0: 12.464 |
|
[2023-08-30 10:05:33,973][00929] Avg episode reward: 29.321, avg true_objective: 12.464 |
|
[2023-08-30 10:05:34,077][00929] Num frames 8800... |
|
[2023-08-30 10:05:34,217][00929] Num frames 8900... |
|
[2023-08-30 10:05:34,367][00929] Num frames 9000... |
|
[2023-08-30 10:05:34,506][00929] Num frames 9100... |
|
[2023-08-30 10:05:34,642][00929] Num frames 9200... |
|
[2023-08-30 10:05:34,769][00929] Num frames 9300... |
|
[2023-08-30 10:05:34,898][00929] Num frames 9400... |
|
[2023-08-30 10:05:35,028][00929] Num frames 9500... |
|
[2023-08-30 10:05:35,157][00929] Avg episode rewards: #0: 27.696, true rewards: #0: 11.946 |
|
[2023-08-30 10:05:35,159][00929] Avg episode reward: 27.696, avg true_objective: 11.946 |
|
[2023-08-30 10:05:35,218][00929] Num frames 9600... |
|
[2023-08-30 10:05:35,350][00929] Num frames 9700... |
|
[2023-08-30 10:05:35,495][00929] Num frames 9800... |
|
[2023-08-30 10:05:35,627][00929] Num frames 9900... |
|
[2023-08-30 10:05:35,757][00929] Num frames 10000... |
|
[2023-08-30 10:05:35,885][00929] Num frames 10100... |
|
[2023-08-30 10:05:36,018][00929] Num frames 10200... |
|
[2023-08-30 10:05:36,195][00929] Avg episode rewards: #0: 25.992, true rewards: #0: 11.437 |
|
[2023-08-30 10:05:36,197][00929] Avg episode reward: 25.992, avg true_objective: 11.437 |
|
[2023-08-30 10:05:36,210][00929] Num frames 10300... |
|
[2023-08-30 10:05:36,341][00929] Num frames 10400... |
|
[2023-08-30 10:05:36,476][00929] Num frames 10500... |
|
[2023-08-30 10:05:36,604][00929] Num frames 10600... |
|
[2023-08-30 10:05:36,734][00929] Num frames 10700... |
|
[2023-08-30 10:05:36,862][00929] Num frames 10800... |
|
[2023-08-30 10:05:36,991][00929] Num frames 10900... |
|
[2023-08-30 10:05:37,120][00929] Num frames 11000... |
|
[2023-08-30 10:05:37,252][00929] Num frames 11100... |
|
[2023-08-30 10:05:37,383][00929] Num frames 11200... |
|
[2023-08-30 10:05:37,519][00929] Num frames 11300... |
|
[2023-08-30 10:05:37,600][00929] Avg episode rewards: #0: 25.517, true rewards: #0: 11.317 |
|
[2023-08-30 10:05:37,602][00929] Avg episode reward: 25.517, avg true_objective: 11.317 |
|
[2023-08-30 10:06:55,782][00929] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2023-08-30 10:11:32,476][00929] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2023-08-30 10:11:32,477][00929] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2023-08-30 10:11:32,479][00929] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2023-08-30 10:11:32,482][00929] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2023-08-30 10:11:32,484][00929] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2023-08-30 10:11:32,486][00929] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2023-08-30 10:11:32,492][00929] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2023-08-30 10:11:32,493][00929] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2023-08-30 10:11:32,494][00929] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2023-08-30 10:11:32,495][00929] Adding new argument 'hf_repository'='AdanLee/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2023-08-30 10:11:32,496][00929] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2023-08-30 10:11:32,497][00929] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2023-08-30 10:11:32,499][00929] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2023-08-30 10:11:32,500][00929] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2023-08-30 10:11:32,501][00929] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2023-08-30 10:11:32,535][00929] RunningMeanStd input shape: (3, 72, 128) |
|
[2023-08-30 10:11:32,536][00929] RunningMeanStd input shape: (1,) |
|
[2023-08-30 10:11:32,550][00929] ConvEncoder: input_channels=3 |
|
[2023-08-30 10:11:32,586][00929] Conv encoder output size: 512 |
|
[2023-08-30 10:11:32,588][00929] Policy head output size: 512 |
|
[2023-08-30 10:11:32,607][00929] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2023-08-30 10:11:33,069][00929] Num frames 100... |
|
[2023-08-30 10:11:33,210][00929] Num frames 200... |
|
[2023-08-30 10:11:33,348][00929] Num frames 300... |
|
[2023-08-30 10:11:33,474][00929] Num frames 400... |
|
[2023-08-30 10:11:33,598][00929] Num frames 500... |
|
[2023-08-30 10:11:33,730][00929] Num frames 600... |
|
[2023-08-30 10:11:33,873][00929] Num frames 700... |
|
[2023-08-30 10:11:34,007][00929] Num frames 800... |
|
[2023-08-30 10:11:34,136][00929] Num frames 900... |
|
[2023-08-30 10:11:34,268][00929] Num frames 1000... |
|
[2023-08-30 10:11:34,405][00929] Num frames 1100... |
|
[2023-08-30 10:11:34,540][00929] Num frames 1200... |
|
[2023-08-30 10:11:34,676][00929] Num frames 1300... |
|
[2023-08-30 10:11:34,807][00929] Num frames 1400... |
|
[2023-08-30 10:11:34,938][00929] Num frames 1500... |
|
[2023-08-30 10:11:35,067][00929] Num frames 1600... |
|
[2023-08-30 10:11:35,205][00929] Num frames 1700... |
|
[2023-08-30 10:11:35,335][00929] Num frames 1800... |
|
[2023-08-30 10:11:35,470][00929] Avg episode rewards: #0: 46.559, true rewards: #0: 18.560 |
|
[2023-08-30 10:11:35,472][00929] Avg episode reward: 46.559, avg true_objective: 18.560 |
|
[2023-08-30 10:11:35,534][00929] Num frames 1900... |
|
[2023-08-30 10:11:35,667][00929] Num frames 2000... |
|
[2023-08-30 10:11:35,799][00929] Num frames 2100... |
|
[2023-08-30 10:11:35,936][00929] Num frames 2200... |
|
[2023-08-30 10:11:36,064][00929] Num frames 2300... |
|
[2023-08-30 10:11:36,258][00929] Num frames 2400... |
|
[2023-08-30 10:11:36,470][00929] Num frames 2500... |
|
[2023-08-30 10:11:36,659][00929] Num frames 2600... |
|
[2023-08-30 10:11:36,861][00929] Num frames 2700... |
|
[2023-08-30 10:11:37,058][00929] Num frames 2800... |
|
[2023-08-30 10:11:37,251][00929] Num frames 2900... |
|
[2023-08-30 10:11:37,454][00929] Num frames 3000... |
|
[2023-08-30 10:11:37,636][00929] Num frames 3100... |
|
[2023-08-30 10:11:37,852][00929] Num frames 3200... |
|
[2023-08-30 10:11:38,044][00929] Num frames 3300... |
|
[2023-08-30 10:11:38,235][00929] Num frames 3400... |
|
[2023-08-30 10:11:38,430][00929] Num frames 3500... |
|
[2023-08-30 10:11:38,605][00929] Avg episode rewards: #0: 43.260, true rewards: #0: 17.760 |
|
[2023-08-30 10:11:38,608][00929] Avg episode reward: 43.260, avg true_objective: 17.760 |
|
[2023-08-30 10:11:38,711][00929] Num frames 3600... |
|
[2023-08-30 10:11:38,903][00929] Num frames 3700... |
|
[2023-08-30 10:11:39,109][00929] Num frames 3800... |
|
[2023-08-30 10:11:39,298][00929] Num frames 3900... |
|
[2023-08-30 10:11:39,493][00929] Num frames 4000... |
|
[2023-08-30 10:11:39,692][00929] Num frames 4100... |
|
[2023-08-30 10:11:39,885][00929] Num frames 4200... |
|
[2023-08-30 10:11:40,115][00929] Avg episode rewards: #0: 33.626, true rewards: #0: 14.293 |
|
[2023-08-30 10:11:40,117][00929] Avg episode reward: 33.626, avg true_objective: 14.293 |
|
[2023-08-30 10:11:40,147][00929] Num frames 4300... |
|
[2023-08-30 10:11:40,330][00929] Num frames 4400... |
|
[2023-08-30 10:11:40,460][00929] Num frames 4500... |
|
[2023-08-30 10:11:40,595][00929] Num frames 4600... |
|
[2023-08-30 10:11:40,725][00929] Num frames 4700... |
|
[2023-08-30 10:11:40,862][00929] Num frames 4800... |
|
[2023-08-30 10:11:40,995][00929] Num frames 4900... |
|
[2023-08-30 10:11:41,123][00929] Num frames 5000... |
|
[2023-08-30 10:11:41,260][00929] Num frames 5100... |
|
[2023-08-30 10:11:41,393][00929] Num frames 5200... |
|
[2023-08-30 10:11:41,528][00929] Num frames 5300... |
|
[2023-08-30 10:11:41,696][00929] Num frames 5400... |
|
[2023-08-30 10:11:41,836][00929] Num frames 5500... |
|
[2023-08-30 10:11:41,968][00929] Num frames 5600... |
|
[2023-08-30 10:11:42,103][00929] Num frames 5700... |
|
[2023-08-30 10:11:42,155][00929] Avg episode rewards: #0: 34.250, true rewards: #0: 14.250 |
|
[2023-08-30 10:11:42,156][00929] Avg episode reward: 34.250, avg true_objective: 14.250 |
|
[2023-08-30 10:11:42,295][00929] Num frames 5800... |
|
[2023-08-30 10:11:42,428][00929] Num frames 5900... |
|
[2023-08-30 10:11:42,556][00929] Num frames 6000... |
|
[2023-08-30 10:11:42,693][00929] Num frames 6100... |
|
[2023-08-30 10:11:42,825][00929] Num frames 6200... |
|
[2023-08-30 10:11:42,958][00929] Num frames 6300... |
|
[2023-08-30 10:11:43,094][00929] Num frames 6400... |
|
[2023-08-30 10:11:43,225][00929] Num frames 6500... |
|
[2023-08-30 10:11:43,370][00929] Num frames 6600... |
|
[2023-08-30 10:11:43,501][00929] Num frames 6700... |
|
[2023-08-30 10:11:43,640][00929] Num frames 6800... |
|
[2023-08-30 10:11:43,774][00929] Num frames 6900... |
|
[2023-08-30 10:11:43,905][00929] Num frames 7000... |
|
[2023-08-30 10:11:43,987][00929] Avg episode rewards: #0: 34.038, true rewards: #0: 14.038 |
|
[2023-08-30 10:11:43,989][00929] Avg episode reward: 34.038, avg true_objective: 14.038 |
|
[2023-08-30 10:11:44,112][00929] Num frames 7100... |
|
[2023-08-30 10:11:44,272][00929] Num frames 7200... |
|
[2023-08-30 10:11:44,430][00929] Avg episode rewards: #0: 29.125, true rewards: #0: 12.125 |
|
[2023-08-30 10:11:44,432][00929] Avg episode reward: 29.125, avg true_objective: 12.125 |
|
[2023-08-30 10:11:44,470][00929] Num frames 7300... |
|
[2023-08-30 10:11:44,613][00929] Num frames 7400... |
|
[2023-08-30 10:11:44,764][00929] Num frames 7500... |
|
[2023-08-30 10:11:44,895][00929] Num frames 7600... |
|
[2023-08-30 10:11:45,027][00929] Num frames 7700... |
|
[2023-08-30 10:11:45,151][00929] Num frames 7800... |
|
[2023-08-30 10:11:45,233][00929] Avg episode rewards: #0: 26.170, true rewards: #0: 11.170 |
|
[2023-08-30 10:11:45,234][00929] Avg episode reward: 26.170, avg true_objective: 11.170 |
|
[2023-08-30 10:11:45,349][00929] Num frames 7900... |
|
[2023-08-30 10:11:45,486][00929] Num frames 8000... |
|
[2023-08-30 10:11:45,610][00929] Num frames 8100... |
|
[2023-08-30 10:11:45,743][00929] Num frames 8200... |
|
[2023-08-30 10:11:45,877][00929] Num frames 8300... |
|
[2023-08-30 10:11:46,009][00929] Num frames 8400... |
|
[2023-08-30 10:11:46,146][00929] Num frames 8500... |
|
[2023-08-30 10:11:46,279][00929] Num frames 8600... |
|
[2023-08-30 10:11:46,417][00929] Num frames 8700... |
|
[2023-08-30 10:11:46,549][00929] Num frames 8800... |
|
[2023-08-30 10:11:46,677][00929] Num frames 8900...
[2023-08-30 10:11:46,815][00929] Num frames 9000...
[2023-08-30 10:11:46,949][00929] Num frames 9100...
[2023-08-30 10:11:47,080][00929] Num frames 9200...
[2023-08-30 10:11:47,223][00929] Num frames 9300...
[2023-08-30 10:11:47,356][00929] Num frames 9400...
[2023-08-30 10:11:47,523][00929] Avg episode rewards: #0: 27.479, true rewards: #0: 11.854
[2023-08-30 10:11:47,525][00929] Avg episode reward: 27.479, avg true_objective: 11.854
[2023-08-30 10:11:47,553][00929] Num frames 9500...
[2023-08-30 10:11:47,680][00929] Num frames 9600...
[2023-08-30 10:11:47,823][00929] Num frames 9700...
[2023-08-30 10:11:47,962][00929] Num frames 9800...
[2023-08-30 10:11:48,111][00929] Num frames 9900...
[2023-08-30 10:11:48,251][00929] Num frames 10000...
[2023-08-30 10:11:48,401][00929] Num frames 10100...
[2023-08-30 10:11:48,543][00929] Num frames 10200...
[2023-08-30 10:11:48,681][00929] Num frames 10300...
[2023-08-30 10:11:48,821][00929] Num frames 10400...
[2023-08-30 10:11:48,952][00929] Num frames 10500...
[2023-08-30 10:11:49,089][00929] Num frames 10600...
[2023-08-30 10:11:49,226][00929] Num frames 10700...
[2023-08-30 10:11:49,365][00929] Avg episode rewards: #0: 27.737, true rewards: #0: 11.959
[2023-08-30 10:11:49,367][00929] Avg episode reward: 27.737, avg true_objective: 11.959
[2023-08-30 10:11:49,430][00929] Num frames 10800...
[2023-08-30 10:11:49,623][00929] Num frames 10900...
[2023-08-30 10:11:49,840][00929] Num frames 11000...
[2023-08-30 10:11:50,029][00929] Num frames 11100...
[2023-08-30 10:11:50,230][00929] Num frames 11200...
[2023-08-30 10:11:50,447][00929] Num frames 11300...
[2023-08-30 10:11:50,651][00929] Num frames 11400...
[2023-08-30 10:11:50,716][00929] Avg episode rewards: #0: 26.303, true rewards: #0: 11.403
[2023-08-30 10:11:50,719][00929] Avg episode reward: 26.303, avg true_objective: 11.403
[2023-08-30 10:13:12,913][00929] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-08-30 10:13:48,687][00929] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-08-30 10:13:48,689][00929] Overriding arg 'num_workers' with value 1 passed from command line
[2023-08-30 10:13:48,691][00929] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-08-30 10:13:48,693][00929] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-08-30 10:13:48,694][00929] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-08-30 10:13:48,696][00929] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-08-30 10:13:48,697][00929] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-08-30 10:13:48,700][00929] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-08-30 10:13:48,701][00929] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-08-30 10:13:48,702][00929] Adding new argument 'hf_repository'='AdanLee/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-08-30 10:13:48,703][00929] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-08-30 10:13:48,704][00929] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-08-30 10:13:48,705][00929] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-08-30 10:13:48,706][00929] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-08-30 10:13:48,707][00929] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-08-30 10:13:48,754][00929] RunningMeanStd input shape: (3, 72, 128)
[2023-08-30 10:13:48,757][00929] RunningMeanStd input shape: (1,)
[2023-08-30 10:13:48,770][00929] ConvEncoder: input_channels=3
[2023-08-30 10:13:48,807][00929] Conv encoder output size: 512
[2023-08-30 10:13:48,809][00929] Policy head output size: 512
[2023-08-30 10:13:48,827][00929] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-08-30 10:13:49,409][00929] Num frames 100...
[2023-08-30 10:13:49,605][00929] Num frames 200...
[2023-08-30 10:13:49,796][00929] Num frames 300...
[2023-08-30 10:13:49,979][00929] Num frames 400...
[2023-08-30 10:13:50,169][00929] Num frames 500...
[2023-08-30 10:13:50,367][00929] Num frames 600...
[2023-08-30 10:13:50,556][00929] Num frames 700...
[2023-08-30 10:13:50,747][00929] Num frames 800...
[2023-08-30 10:13:50,930][00929] Num frames 900...
[2023-08-30 10:13:51,114][00929] Num frames 1000...
[2023-08-30 10:13:51,304][00929] Num frames 1100...
[2023-08-30 10:13:51,503][00929] Num frames 1200...
[2023-08-30 10:13:51,605][00929] Avg episode rewards: #0: 28.200, true rewards: #0: 12.200
[2023-08-30 10:13:51,607][00929] Avg episode reward: 28.200, avg true_objective: 12.200
[2023-08-30 10:13:51,764][00929] Num frames 1300...
[2023-08-30 10:13:51,957][00929] Num frames 1400...
[2023-08-30 10:13:52,142][00929] Num frames 1500...
[2023-08-30 10:13:52,343][00929] Num frames 1600...
[2023-08-30 10:13:52,548][00929] Num frames 1700...
[2023-08-30 10:13:52,742][00929] Num frames 1800...
[2023-08-30 10:13:52,947][00929] Num frames 1900...
[2023-08-30 10:13:53,141][00929] Num frames 2000...
[2023-08-30 10:13:53,358][00929] Avg episode rewards: #0: 23.920, true rewards: #0: 10.420
[2023-08-30 10:13:53,360][00929] Avg episode reward: 23.920, avg true_objective: 10.420
[2023-08-30 10:13:53,387][00929] Num frames 2100...
[2023-08-30 10:13:53,539][00929] Num frames 2200...
[2023-08-30 10:13:53,677][00929] Num frames 2300...
[2023-08-30 10:13:53,807][00929] Num frames 2400...
[2023-08-30 10:13:53,937][00929] Num frames 2500...
[2023-08-30 10:13:54,019][00929] Avg episode rewards: #0: 18.397, true rewards: #0: 8.397
[2023-08-30 10:13:54,021][00929] Avg episode reward: 18.397, avg true_objective: 8.397
[2023-08-30 10:13:54,137][00929] Num frames 2600...
[2023-08-30 10:13:54,268][00929] Num frames 2700...
[2023-08-30 10:13:54,393][00929] Num frames 2800...
[2023-08-30 10:13:54,531][00929] Num frames 2900...
[2023-08-30 10:13:54,661][00929] Num frames 3000...
[2023-08-30 10:13:54,765][00929] Avg episode rewards: #0: 15.578, true rewards: #0: 7.577
[2023-08-30 10:13:54,766][00929] Avg episode reward: 15.578, avg true_objective: 7.577
[2023-08-30 10:13:54,857][00929] Num frames 3100...
[2023-08-30 10:13:54,988][00929] Num frames 3200...
[2023-08-30 10:13:55,121][00929] Num frames 3300...
[2023-08-30 10:13:55,251][00929] Num frames 3400...
[2023-08-30 10:13:55,391][00929] Num frames 3500...
[2023-08-30 10:13:55,540][00929] Num frames 3600...
[2023-08-30 10:13:55,673][00929] Num frames 3700...
[2023-08-30 10:13:55,805][00929] Num frames 3800...
[2023-08-30 10:13:55,935][00929] Num frames 3900...
[2023-08-30 10:13:56,115][00929] Avg episode rewards: #0: 17.582, true rewards: #0: 7.982
[2023-08-30 10:13:56,117][00929] Avg episode reward: 17.582, avg true_objective: 7.982
[2023-08-30 10:13:56,135][00929] Num frames 4000...
[2023-08-30 10:13:56,271][00929] Num frames 4100...
[2023-08-30 10:13:56,407][00929] Num frames 4200...
[2023-08-30 10:13:56,558][00929] Num frames 4300...
[2023-08-30 10:13:56,703][00929] Num frames 4400...
[2023-08-30 10:13:56,834][00929] Num frames 4500...
[2023-08-30 10:13:56,966][00929] Num frames 4600...
[2023-08-30 10:13:57,108][00929] Num frames 4700...
[2023-08-30 10:13:57,259][00929] Num frames 4800...
[2023-08-30 10:13:57,390][00929] Num frames 4900...
[2023-08-30 10:13:57,533][00929] Num frames 5000...
[2023-08-30 10:13:57,667][00929] Num frames 5100...
[2023-08-30 10:13:57,796][00929] Num frames 5200...
[2023-08-30 10:13:57,928][00929] Num frames 5300...
[2023-08-30 10:13:58,060][00929] Num frames 5400...
[2023-08-30 10:13:58,193][00929] Num frames 5500...
[2023-08-30 10:13:58,340][00929] Num frames 5600...
[2023-08-30 10:13:58,482][00929] Num frames 5700...
[2023-08-30 10:13:58,619][00929] Num frames 5800...
[2023-08-30 10:13:58,754][00929] Num frames 5900...
[2023-08-30 10:13:58,864][00929] Avg episode rewards: #0: 22.737, true rewards: #0: 9.903
[2023-08-30 10:13:58,866][00929] Avg episode reward: 22.737, avg true_objective: 9.903
[2023-08-30 10:13:58,948][00929] Num frames 6000...
[2023-08-30 10:13:59,087][00929] Num frames 6100...
[2023-08-30 10:13:59,214][00929] Num frames 6200...
[2023-08-30 10:13:59,364][00929] Num frames 6300...
[2023-08-30 10:13:59,492][00929] Num frames 6400...
[2023-08-30 10:13:59,633][00929] Num frames 6500...
[2023-08-30 10:13:59,761][00929] Num frames 6600...
[2023-08-30 10:13:59,892][00929] Num frames 6700...
[2023-08-30 10:14:00,023][00929] Num frames 6800...
[2023-08-30 10:14:00,157][00929] Num frames 6900...
[2023-08-30 10:14:00,290][00929] Num frames 7000...
[2023-08-30 10:14:00,386][00929] Avg episode rewards: #0: 23.186, true rewards: #0: 10.043
[2023-08-30 10:14:00,388][00929] Avg episode reward: 23.186, avg true_objective: 10.043
[2023-08-30 10:14:00,485][00929] Num frames 7100...
[2023-08-30 10:14:00,626][00929] Num frames 7200...
[2023-08-30 10:14:00,759][00929] Num frames 7300...
[2023-08-30 10:14:00,888][00929] Num frames 7400...
[2023-08-30 10:14:01,041][00929] Avg episode rewards: #0: 21.341, true rewards: #0: 9.341
[2023-08-30 10:14:01,043][00929] Avg episode reward: 21.341, avg true_objective: 9.341
[2023-08-30 10:14:01,086][00929] Num frames 7500...
[2023-08-30 10:14:01,230][00929] Num frames 7600...
[2023-08-30 10:14:01,374][00929] Num frames 7700...
[2023-08-30 10:14:01,520][00929] Num frames 7800...
[2023-08-30 10:14:01,671][00929] Num frames 7900...
[2023-08-30 10:14:01,807][00929] Num frames 8000...
[2023-08-30 10:14:01,888][00929] Avg episode rewards: #0: 20.020, true rewards: #0: 8.909
[2023-08-30 10:14:01,889][00929] Avg episode reward: 20.020, avg true_objective: 8.909
[2023-08-30 10:14:02,011][00929] Num frames 8100...
[2023-08-30 10:14:02,150][00929] Num frames 8200...
[2023-08-30 10:14:02,291][00929] Num frames 8300...
[2023-08-30 10:14:02,431][00929] Num frames 8400...
[2023-08-30 10:14:02,563][00929] Num frames 8500...
[2023-08-30 10:14:02,702][00929] Num frames 8600...
[2023-08-30 10:14:02,830][00929] Num frames 8700...
[2023-08-30 10:14:02,962][00929] Num frames 8800...
[2023-08-30 10:14:03,089][00929] Num frames 8900...
[2023-08-30 10:14:03,232][00929] Num frames 9000...
[2023-08-30 10:14:03,304][00929] Avg episode rewards: #0: 20.010, true rewards: #0: 9.010
[2023-08-30 10:14:03,306][00929] Avg episode reward: 20.010, avg true_objective: 9.010
[2023-08-30 10:15:06,700][00929] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
[2023-08-30 10:16:44,707][00929] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2023-08-30 10:16:44,708][00929] Overriding arg 'num_workers' with value 1 passed from command line
[2023-08-30 10:16:44,710][00929] Adding new argument 'no_render'=True that is not in the saved config file!
[2023-08-30 10:16:44,712][00929] Adding new argument 'save_video'=True that is not in the saved config file!
[2023-08-30 10:16:44,714][00929] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2023-08-30 10:16:44,715][00929] Adding new argument 'video_name'=None that is not in the saved config file!
[2023-08-30 10:16:44,716][00929] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2023-08-30 10:16:44,717][00929] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2023-08-30 10:16:44,718][00929] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2023-08-30 10:16:44,720][00929] Adding new argument 'hf_repository'='AdanLee/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2023-08-30 10:16:44,721][00929] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2023-08-30 10:16:44,722][00929] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2023-08-30 10:16:44,723][00929] Adding new argument 'train_script'=None that is not in the saved config file!
[2023-08-30 10:16:44,725][00929] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2023-08-30 10:16:44,726][00929] Using frameskip 1 and render_action_repeat=4 for evaluation
[2023-08-30 10:16:44,766][00929] RunningMeanStd input shape: (3, 72, 128)
[2023-08-30 10:16:44,767][00929] RunningMeanStd input shape: (1,)
[2023-08-30 10:16:44,781][00929] ConvEncoder: input_channels=3
[2023-08-30 10:16:44,819][00929] Conv encoder output size: 512
[2023-08-30 10:16:44,821][00929] Policy head output size: 512
[2023-08-30 10:16:44,841][00929] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
[2023-08-30 10:16:45,337][00929] Num frames 100...
[2023-08-30 10:16:45,480][00929] Num frames 200...
[2023-08-30 10:16:45,611][00929] Num frames 300...
[2023-08-30 10:16:45,746][00929] Num frames 400...
[2023-08-30 10:16:45,881][00929] Num frames 500...
[2023-08-30 10:16:46,012][00929] Num frames 600...
[2023-08-30 10:16:46,148][00929] Num frames 700...
[2023-08-30 10:16:46,290][00929] Num frames 800...
[2023-08-30 10:16:46,441][00929] Num frames 900...
[2023-08-30 10:16:46,677][00929] Avg episode rewards: #0: 21.920, true rewards: #0: 9.920
[2023-08-30 10:16:46,680][00929] Avg episode reward: 21.920, avg true_objective: 9.920
[2023-08-30 10:16:46,701][00929] Num frames 1000...
[2023-08-30 10:16:46,895][00929] Num frames 1100...
[2023-08-30 10:16:47,098][00929] Num frames 1200...
[2023-08-30 10:16:47,295][00929] Num frames 1300...
[2023-08-30 10:16:47,513][00929] Num frames 1400...
[2023-08-30 10:16:47,722][00929] Num frames 1500...
[2023-08-30 10:16:47,921][00929] Num frames 1600...
[2023-08-30 10:16:48,128][00929] Num frames 1700...
[2023-08-30 10:16:48,335][00929] Num frames 1800...
[2023-08-30 10:16:48,541][00929] Num frames 1900...
[2023-08-30 10:16:48,637][00929] Avg episode rewards: #0: 20.600, true rewards: #0: 9.600
[2023-08-30 10:16:48,639][00929] Avg episode reward: 20.600, avg true_objective: 9.600
[2023-08-30 10:16:48,806][00929] Num frames 2000...
[2023-08-30 10:16:49,010][00929] Num frames 2100...
[2023-08-30 10:16:49,207][00929] Num frames 2200...
[2023-08-30 10:16:49,428][00929] Num frames 2300...
[2023-08-30 10:16:49,655][00929] Num frames 2400...
[2023-08-30 10:16:49,865][00929] Num frames 2500...
[2023-08-30 10:16:50,057][00929] Num frames 2600...
[2023-08-30 10:16:50,251][00929] Num frames 2700...
[2023-08-30 10:16:50,453][00929] Num frames 2800...
[2023-08-30 10:16:50,650][00929] Num frames 2900...
[2023-08-30 10:16:50,844][00929] Num frames 3000...
[2023-08-30 10:16:50,977][00929] Num frames 3100...
[2023-08-30 10:16:51,079][00929] Avg episode rewards: #0: 24.454, true rewards: #0: 10.453
[2023-08-30 10:16:51,080][00929] Avg episode reward: 24.454, avg true_objective: 10.453
[2023-08-30 10:16:51,174][00929] Num frames 3200...
[2023-08-30 10:16:51,317][00929] Num frames 3300...
[2023-08-30 10:16:51,448][00929] Num frames 3400...
[2023-08-30 10:16:51,590][00929] Num frames 3500...
[2023-08-30 10:16:51,717][00929] Num frames 3600...
[2023-08-30 10:16:51,851][00929] Num frames 3700...
[2023-08-30 10:16:51,987][00929] Num frames 3800...
[2023-08-30 10:16:52,096][00929] Avg episode rewards: #0: 21.850, true rewards: #0: 9.600
[2023-08-30 10:16:52,098][00929] Avg episode reward: 21.850, avg true_objective: 9.600
[2023-08-30 10:16:52,197][00929] Num frames 3900...
[2023-08-30 10:16:52,351][00929] Num frames 4000...
[2023-08-30 10:16:52,497][00929] Num frames 4100...
[2023-08-30 10:16:52,644][00929] Num frames 4200...
[2023-08-30 10:16:52,777][00929] Num frames 4300...
[2023-08-30 10:16:52,910][00929] Num frames 4400...
[2023-08-30 10:16:53,046][00929] Num frames 4500...
[2023-08-30 10:16:53,179][00929] Num frames 4600...
[2023-08-30 10:16:53,315][00929] Num frames 4700...
[2023-08-30 10:16:53,462][00929] Num frames 4800...
[2023-08-30 10:16:53,606][00929] Num frames 4900...
[2023-08-30 10:16:53,743][00929] Num frames 5000...
[2023-08-30 10:16:53,876][00929] Num frames 5100...
[2023-08-30 10:16:54,008][00929] Num frames 5200...
[2023-08-30 10:16:54,144][00929] Num frames 5300...
[2023-08-30 10:16:54,277][00929] Num frames 5400...
[2023-08-30 10:16:54,417][00929] Num frames 5500...
[2023-08-30 10:16:54,548][00929] Avg episode rewards: #0: 25.702, true rewards: #0: 11.102
[2023-08-30 10:16:54,551][00929] Avg episode reward: 25.702, avg true_objective: 11.102
[2023-08-30 10:16:54,630][00929] Num frames 5600...
[2023-08-30 10:16:54,759][00929] Num frames 5700...
[2023-08-30 10:16:54,891][00929] Num frames 5800...
[2023-08-30 10:16:55,028][00929] Num frames 5900...
[2023-08-30 10:16:55,159][00929] Num frames 6000...
[2023-08-30 10:16:55,322][00929] Avg episode rewards: #0: 23.288, true rewards: #0: 10.122
[2023-08-30 10:16:55,324][00929] Avg episode reward: 23.288, avg true_objective: 10.122
[2023-08-30 10:16:55,368][00929] Num frames 6100...
[2023-08-30 10:16:55,508][00929] Num frames 6200...
[2023-08-30 10:16:55,651][00929] Num frames 6300...
[2023-08-30 10:16:55,786][00929] Num frames 6400...
[2023-08-30 10:16:55,919][00929] Num frames 6500...
[2023-08-30 10:16:56,049][00929] Num frames 6600...
[2023-08-30 10:16:56,186][00929] Num frames 6700...
[2023-08-30 10:16:56,321][00929] Num frames 6800...
[2023-08-30 10:16:56,393][00929] Avg episode rewards: #0: 21.870, true rewards: #0: 9.727
[2023-08-30 10:16:56,396][00929] Avg episode reward: 21.870, avg true_objective: 9.727
[2023-08-30 10:16:56,533][00929] Num frames 6900...
[2023-08-30 10:16:56,677][00929] Num frames 7000...
[2023-08-30 10:16:56,811][00929] Num frames 7100...
[2023-08-30 10:16:56,943][00929] Num frames 7200...
[2023-08-30 10:16:57,080][00929] Num frames 7300...
[2023-08-30 10:16:57,221][00929] Num frames 7400...
[2023-08-30 10:16:57,358][00929] Num frames 7500...
[2023-08-30 10:16:57,434][00929] Avg episode rewards: #0: 20.641, true rewards: #0: 9.391
[2023-08-30 10:16:57,437][00929] Avg episode reward: 20.641, avg true_objective: 9.391
[2023-08-30 10:16:57,552][00929] Num frames 7600...
[2023-08-30 10:16:57,742][00929] Avg episode rewards: #0: 18.551, true rewards: #0: 8.551
[2023-08-30 10:16:57,744][00929] Avg episode reward: 18.551, avg true_objective: 8.551
[2023-08-30 10:16:57,756][00929] Num frames 7700...
[2023-08-30 10:16:57,888][00929] Num frames 7800...
[2023-08-30 10:16:58,021][00929] Num frames 7900...
[2023-08-30 10:16:58,162][00929] Num frames 8000...
[2023-08-30 10:16:58,307][00929] Num frames 8100...
[2023-08-30 10:16:58,443][00929] Num frames 8200...
[2023-08-30 10:16:58,579][00929] Num frames 8300...
[2023-08-30 10:16:58,721][00929] Num frames 8400...
[2023-08-30 10:16:58,789][00929] Avg episode rewards: #0: 18.008, true rewards: #0: 8.408
[2023-08-30 10:16:58,791][00929] Avg episode reward: 18.008, avg true_objective: 8.408
[2023-08-30 10:17:57,751][00929] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
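
For reference, an evaluation-and-upload run that produces log output like the above is typically launched with Sample Factory's enjoy entry point. The following command is a sketch reconstructed from the argument overrides recorded in the log ('num_workers', 'no_render', 'save_video', 'max_num_episodes', 'push_to_hub', 'hf_repository'); the module path `sf_examples.vizdoom.enjoy_vizdoom` and the `--env` value are assumptions, since neither appears in the log itself.

```shell
# Hypothetical reconstruction of the evaluation command behind this log.
# Module path and --env value are assumptions; the remaining flags match
# the overridden/added arguments the log records.
python -m sf_examples.vizdoom.enjoy_vizdoom \
  --env=doom_health_gathering_supreme \
  --train_dir=/content/train_dir \
  --experiment=default_experiment \
  --num_workers=1 \
  --no_render \
  --save_video \
  --max_num_episodes=10 \
  --push_to_hub \
  --hf_repository=AdanLee/rl_course_vizdoom_health_gathering_supreme
```

With `--save_video` set, the script writes `replay.mp4` under the experiment directory, which matches the "Replay video saved" lines above.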