|
[2024-07-24 10:07:50,442][00224] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-07-24 10:07:50,445][00224] Rollout worker 0 uses device cpu
[2024-07-24 10:07:50,447][00224] Rollout worker 1 uses device cpu
[2024-07-24 10:07:50,448][00224] Rollout worker 2 uses device cpu
[2024-07-24 10:07:50,449][00224] Rollout worker 3 uses device cpu
[2024-07-24 10:07:50,450][00224] Rollout worker 4 uses device cpu
[2024-07-24 10:07:50,451][00224] Rollout worker 5 uses device cpu
[2024-07-24 10:07:50,453][00224] Rollout worker 6 uses device cpu
[2024-07-24 10:07:50,454][00224] Rollout worker 7 uses device cpu
[2024-07-24 10:07:50,695][00224] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-07-24 10:07:50,696][00224] InferenceWorker_p0-w0: min num requests: 2
[2024-07-24 10:07:50,748][00224] Starting all processes...
[2024-07-24 10:07:50,753][00224] Starting process learner_proc0
[2024-07-24 10:07:53,168][00224] Starting all processes...
[2024-07-24 10:07:53,219][00224] Starting process inference_proc0-0
[2024-07-24 10:07:53,220][00224] Starting process rollout_proc0
[2024-07-24 10:07:53,220][00224] Starting process rollout_proc1
[2024-07-24 10:07:53,221][00224] Starting process rollout_proc2
[2024-07-24 10:07:53,222][00224] Starting process rollout_proc3
[2024-07-24 10:07:53,222][00224] Starting process rollout_proc4
[2024-07-24 10:07:53,222][00224] Starting process rollout_proc5
[2024-07-24 10:07:53,222][00224] Starting process rollout_proc6
[2024-07-24 10:07:53,222][00224] Starting process rollout_proc7
[2024-07-24 10:08:08,507][01056] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-07-24 10:08:08,507][01056] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
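
The learner pins itself to GPU 0 by exporting CUDA_VISIBLE_DEVICES before CUDA is initialized, which is why the next line reports a single visible device. A minimal sketch of the same idea (the environment variable is the real CUDA convention; everything else here is illustrative):

```python
import os

# Must be set before the first CUDA call in this process,
# otherwise PyTorch has already enumerated all physical GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# With visibility restricted, device indices are renumbered:
# physical GPU 0 becomes cuda:0 and is the only device seen.
print(torch.cuda.device_count())  # -> 1, matching "Num visible devices: 1"
```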
|
[2024-07-24 10:08:08,638][01056] Num visible devices: 1
[2024-07-24 10:08:08,694][01056] Starting seed is not provided
[2024-07-24 10:08:08,694][01056] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-07-24 10:08:08,694][01056] Initializing actor-critic model on device cuda:0
[2024-07-24 10:08:08,695][01056] RunningMeanStd input shape: (3, 72, 128)
[2024-07-24 10:08:08,698][01056] RunningMeanStd input shape: (1,)
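
The two RunningMeanStd lines show that both the observations (3x72x128 CHW frames) and the returns (scalars, shape (1,)) are normalized with running statistics. A minimal sketch of such a normalizer, assuming the usual parallel-variance (Chan et al.) update rather than Sample Factory's exact implementation:

```python
import numpy as np

class RunningMeanStd:
    """Tracks running mean/variance of a stream of batches."""
    def __init__(self, shape):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = 1e-4  # avoids division by zero before the first update

    def update(self, batch):
        # Combine current statistics with those of the new batch.
        batch_mean = batch.mean(axis=0)
        batch_var = batch.var(axis=0)
        batch_count = batch.shape[0]
        delta = batch_mean - self.mean
        total = self.count + batch_count
        self.mean = self.mean + delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        self.var = (m_a + m_b + delta**2 * self.count * batch_count / total) / total
        self.count = total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)

obs_norm = RunningMeanStd(shape=(3, 72, 128))   # image observations
ret_norm = RunningMeanStd(shape=(1,))           # scalar returns
```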
|
[2024-07-24 10:08:08,848][01056] ConvEncoder: input_channels=3
[2024-07-24 10:08:08,996][01071] Worker 1 uses CPU cores [1]
[2024-07-24 10:08:09,212][01073] Worker 3 uses CPU cores [1]
[2024-07-24 10:08:09,449][01076] Worker 6 uses CPU cores [0]
[2024-07-24 10:08:09,530][01077] Worker 7 uses CPU cores [1]
[2024-07-24 10:08:09,542][01070] Worker 0 uses CPU cores [0]
[2024-07-24 10:08:09,653][01074] Worker 4 uses CPU cores [0]
[2024-07-24 10:08:09,693][01075] Worker 5 uses CPU cores [1]
[2024-07-24 10:08:09,710][01072] Worker 2 uses CPU cores [0]
[2024-07-24 10:08:09,721][01069] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-07-24 10:08:09,721][01069] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-07-24 10:08:09,756][01069] Num visible devices: 1
[2024-07-24 10:08:09,775][01056] Conv encoder output size: 512
[2024-07-24 10:08:09,776][01056] Policy head output size: 512
[2024-07-24 10:08:09,840][01056] Created Actor Critic model with architecture:
[2024-07-24 10:08:09,840][01056] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
[2024-07-24 10:08:10,180][01056] Using optimizer <class 'torch.optim.adam.Adam'>
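
Putting the printed modules together: this is a shared-weights actor-critic in which three Conv2d+ELU blocks feed a Linear+ELU layer producing a 512-dim embedding, a GRU(512, 512) core adds memory, and two linear heads emit the value estimate (512 -> 1) and the 5 action logits (512 -> 5). A rough PyTorch equivalent of that printout, trained with Adam as the log states (the conv channel/kernel/stride choices and the learning rate are assumptions; only the layer types and the 512/1/5 widths come from the log):

```python
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    def __init__(self, num_actions=5):
        super().__init__()
        # ConvEncoder: input_channels=3, encoder output size 512 (per the log).
        # The specific kernels/strides below are assumptions, not read from the log.
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # Infer the flattened conv output size from a dummy 3x72x128 frame.
        with torch.no_grad():
            n_flat = self.conv_head(torch.zeros(1, 3, 72, 128)).flatten(1).shape[1]
        self.mlp_layers = nn.Sequential(nn.Linear(n_flat, 512), nn.ELU())
        self.core = nn.GRU(512, 512)                              # ModelCoreRNN
        self.critic_linear = nn.Linear(512, 1)                    # value head
        self.distribution_linear = nn.Linear(512, num_actions)    # action logits

    def forward(self, obs, rnn_state):
        x = self.mlp_layers(self.conv_head(obs).flatten(1))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq len 1
        x = x.squeeze(0)
        return self.distribution_linear(x), self.critic_linear(x), rnn_state

model = ActorCriticSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr is an assumption
```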
|
[2024-07-24 10:08:10,689][00224] Heartbeat connected on Batcher_0
[2024-07-24 10:08:10,695][00224] Heartbeat connected on InferenceWorker_p0-w0
[2024-07-24 10:08:10,711][00224] Heartbeat connected on RolloutWorker_w0
[2024-07-24 10:08:10,720][00224] Heartbeat connected on RolloutWorker_w1
[2024-07-24 10:08:10,723][00224] Heartbeat connected on RolloutWorker_w2
[2024-07-24 10:08:10,734][00224] Heartbeat connected on RolloutWorker_w3
[2024-07-24 10:08:10,737][00224] Heartbeat connected on RolloutWorker_w4
[2024-07-24 10:08:10,739][00224] Heartbeat connected on RolloutWorker_w5
[2024-07-24 10:08:10,743][00224] Heartbeat connected on RolloutWorker_w6
[2024-07-24 10:08:10,749][00224] Heartbeat connected on RolloutWorker_w7
[2024-07-24 10:08:11,300][01056] No checkpoints found
[2024-07-24 10:08:11,300][01056] Did not load from checkpoint, starting from scratch!
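
"No checkpoints found" means the learner looked for prior checkpoint_*.pth files in the experiment directory and, finding none, initialized fresh weights at model version 0. The restore logic amounts to something like this (a hypothetical sketch; the directory mirrors the paths that appear later in this log):

```python
from pathlib import Path
import torch

ckpt_dir = Path("/content/train_dir/default_experiment/checkpoint_p0")
# Zero-padded version numbers make lexicographic order == chronological order.
checkpoints = sorted(ckpt_dir.glob("checkpoint_*.pth"))

if checkpoints:
    state = torch.load(checkpoints[-1], map_location="cuda:0")
    # model.load_state_dict(state["model"]) etc. would follow here
else:
    print("No checkpoints found")  # start from scratch, model version 0
```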
|
[2024-07-24 10:08:11,301][01056] Initialized policy 0 weights for model version 0
[2024-07-24 10:08:11,303][01056] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-07-24 10:08:11,342][01056] LearnerWorker_p0 finished initialization!
[2024-07-24 10:08:11,343][00224] Heartbeat connected on LearnerWorker_p0
[2024-07-24 10:08:11,524][01069] RunningMeanStd input shape: (3, 72, 128)
[2024-07-24 10:08:11,526][01069] RunningMeanStd input shape: (1,)
[2024-07-24 10:08:11,538][01069] ConvEncoder: input_channels=3
[2024-07-24 10:08:11,644][01069] Conv encoder output size: 512
[2024-07-24 10:08:11,645][01069] Policy head output size: 512
[2024-07-24 10:08:11,697][00224] Inference worker 0-0 is ready!
[2024-07-24 10:08:11,699][00224] All inference workers are ready! Signal rollout workers to start!
[2024-07-24 10:08:11,956][01072] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:11,958][01076] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,010][01074] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,024][01075] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,071][01077] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,123][01071] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,132][01070] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:12,146][01073] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-07-24 10:08:13,050][01072] Decorrelating experience for 0 frames...
[2024-07-24 10:08:13,049][01070] Decorrelating experience for 0 frames...
[2024-07-24 10:08:13,392][01077] Decorrelating experience for 0 frames...
[2024-07-24 10:08:13,397][01075] Decorrelating experience for 0 frames...
[2024-07-24 10:08:13,413][01071] Decorrelating experience for 0 frames...
[2024-07-24 10:08:13,483][01070] Decorrelating experience for 32 frames...
[2024-07-24 10:08:14,101][00224] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
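
These periodic stats lines report throughput over trailing 10/60/300-second windows (nan until a window has data), the cumulative frame count, the sample count, and the policy lag, i.e. how many model versions behind the learner the policy that produced the current samples is (-1.0 before any rollouts complete). Windowed FPS can be reproduced from (timestamp, total_frames) pairs; a minimal sketch, not the trainer's actual code:

```python
from collections import deque

# (timestamp, total_frames) pairs, appended once per reporting tick
history = deque(maxlen=301)

def windowed_fps(window_sec):
    if len(history) < 2:
        return float("nan")  # matches the "Fps is (10 sec: nan, ...)" first report
    now, frames_now = history[-1]
    # oldest sample still inside the trailing window
    past = [(ts, fr) for ts, fr in history if now - ts <= window_sec]
    ts0, fr0 = past[0]
    if now == ts0:
        return float("nan")
    return (frames_now - fr0) / (now - ts0)
```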
|
[2024-07-24 10:08:14,308][01072] Decorrelating experience for 32 frames...
[2024-07-24 10:08:14,398][01076] Decorrelating experience for 0 frames...
[2024-07-24 10:08:14,500][01077] Decorrelating experience for 32 frames...
[2024-07-24 10:08:14,519][01071] Decorrelating experience for 32 frames...
[2024-07-24 10:08:14,528][01073] Decorrelating experience for 0 frames...
[2024-07-24 10:08:15,501][01076] Decorrelating experience for 32 frames...
[2024-07-24 10:08:15,756][01070] Decorrelating experience for 64 frames...
[2024-07-24 10:08:15,763][01072] Decorrelating experience for 64 frames...
[2024-07-24 10:08:15,768][01075] Decorrelating experience for 32 frames...
[2024-07-24 10:08:15,786][01073] Decorrelating experience for 32 frames...
[2024-07-24 10:08:16,226][01077] Decorrelating experience for 64 frames...
[2024-07-24 10:08:16,572][01071] Decorrelating experience for 64 frames...
[2024-07-24 10:08:17,069][01074] Decorrelating experience for 0 frames...
[2024-07-24 10:08:17,358][01075] Decorrelating experience for 64 frames...
[2024-07-24 10:08:17,397][01071] Decorrelating experience for 96 frames...
[2024-07-24 10:08:17,443][01072] Decorrelating experience for 96 frames...
[2024-07-24 10:08:17,492][01076] Decorrelating experience for 64 frames...
[2024-07-24 10:08:17,772][01070] Decorrelating experience for 96 frames...
[2024-07-24 10:08:18,333][01074] Decorrelating experience for 32 frames...
[2024-07-24 10:08:18,601][01075] Decorrelating experience for 96 frames...
[2024-07-24 10:08:18,637][01073] Decorrelating experience for 64 frames...
[2024-07-24 10:08:19,101][00224] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 3.2. Samples: 16. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-07-24 10:08:20,351][01077] Decorrelating experience for 96 frames...
[2024-07-24 10:08:20,556][01073] Decorrelating experience for 96 frames...
[2024-07-24 10:08:21,228][01074] Decorrelating experience for 64 frames...
[2024-07-24 10:08:21,478][01076] Decorrelating experience for 96 frames...
[2024-07-24 10:08:24,101][00224] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 241.6. Samples: 2416. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-07-24 10:08:24,103][00224] Avg episode reward: [(0, '3.105')]
[2024-07-24 10:08:25,331][01056] Signal inference workers to stop experience collection...
[2024-07-24 10:08:25,349][01069] InferenceWorker_p0-w0: stopping experience collection
[2024-07-24 10:08:25,486][01074] Decorrelating experience for 96 frames...
[2024-07-24 10:08:27,168][01056] Signal inference workers to resume experience collection...
[2024-07-24 10:08:27,169][01069] InferenceWorker_p0-w0: resuming experience collection
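
Before collection starts in earnest, each rollout worker steps its environments ahead by a different number of frames (0, 32, 64, 96 above) so that the eight copies of the same scripted scenario do not all reach identical states at identical times. Conceptually it amounts to something like the sketch below (assuming a Gym-style env API rather than Sample Factory's internals; the worker-index formula is illustrative, inferred only from the 0/32/64/96 pattern in the log):

```python
def decorrelate(env, worker_idx, frames_per_step=32):
    """Advance the env a worker-dependent number of random steps before training."""
    n_frames = (worker_idx % 4) * frames_per_step  # 0, 32, 64 or 96, as in the log
    print(f"Decorrelating experience for {n_frames} frames...")
    env.reset()
    for _ in range(n_frames):
        obs, reward, done, info = env.step(env.action_space.sample())
        if done:
            env.reset()
```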
|
[2024-07-24 10:08:29,101][00224] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 199.3. Samples: 2990. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-07-24 10:08:29,105][00224] Avg episode reward: [(0, '3.334')]
[2024-07-24 10:08:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 1843.2, 300 sec: 1843.2). Total num frames: 36864. Throughput: 0: 414.9. Samples: 8298. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:08:34,103][00224] Avg episode reward: [(0, '3.928')]
[2024-07-24 10:08:35,027][01069] Updated weights for policy 0, policy_version 10 (0.0028)
[2024-07-24 10:08:39,103][00224] Fps is (10 sec: 4095.3, 60 sec: 2129.8, 300 sec: 2129.8). Total num frames: 53248. Throughput: 0: 551.8. Samples: 13796. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:08:39,107][00224] Avg episode reward: [(0, '4.380')]
[2024-07-24 10:08:44,101][00224] Fps is (10 sec: 2867.2, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 526.5. Samples: 15794. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:08:44,104][00224] Avg episode reward: [(0, '4.566')]
[2024-07-24 10:08:47,576][01069] Updated weights for policy 0, policy_version 20 (0.0037)
[2024-07-24 10:08:49,101][00224] Fps is (10 sec: 3277.4, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 86016. Throughput: 0: 606.9. Samples: 21240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:08:49,103][00224] Avg episode reward: [(0, '4.548')]
[2024-07-24 10:08:54,101][00224] Fps is (10 sec: 4096.2, 60 sec: 2662.4, 300 sec: 2662.4). Total num frames: 106496. Throughput: 0: 680.2. Samples: 27208. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:08:54,103][00224] Avg episode reward: [(0, '4.452')]
[2024-07-24 10:08:54,118][01056] Saving new best policy, reward=4.452!
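
From here on, whenever the average episode reward exceeds the best value seen so far, the learner writes a best-policy snapshot; the log records each new record (4.452, 4.461, ... up to the 17-18 range by the end of this excerpt). The bookkeeping is just a running maximum; a hypothetical sketch (the filename is illustrative, not taken from the log):

```python
import torch

best_reward = float("-inf")  # running maximum of avg episode reward

def maybe_save_best(model, avg_episode_reward, path="best_policy.pth"):
    """Save a snapshot whenever the average episode reward sets a new record."""
    global best_reward
    if avg_episode_reward > best_reward:
        best_reward = avg_episode_reward
        print(f"Saving new best policy, reward={avg_episode_reward:.3f}!")
        torch.save(model.state_dict(), path)
```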
|
[2024-07-24 10:08:59,102][00224] Fps is (10 sec: 3276.4, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 657.1. Samples: 29572. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:08:59,110][00224] Avg episode reward: [(0, '4.461')]
[2024-07-24 10:08:59,114][01056] Saving new best policy, reward=4.461!
[2024-07-24 10:08:59,429][01069] Updated weights for policy 0, policy_version 30 (0.0052)
[2024-07-24 10:09:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 2785.3, 300 sec: 2785.3). Total num frames: 139264. Throughput: 0: 753.9. Samples: 33940. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:09:04,103][00224] Avg episode reward: [(0, '4.397')]
[2024-07-24 10:09:09,101][00224] Fps is (10 sec: 4096.6, 60 sec: 2904.4, 300 sec: 2904.4). Total num frames: 159744. Throughput: 0: 846.2. Samples: 40496. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:09:09,106][00224] Avg episode reward: [(0, '4.493')]
[2024-07-24 10:09:09,110][01056] Saving new best policy, reward=4.493!
[2024-07-24 10:09:09,578][01069] Updated weights for policy 0, policy_version 40 (0.0047)
[2024-07-24 10:09:14,102][00224] Fps is (10 sec: 4095.4, 60 sec: 3003.7, 300 sec: 3003.7). Total num frames: 180224. Throughput: 0: 906.2. Samples: 43768. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:09:14,105][00224] Avg episode reward: [(0, '4.459')]
[2024-07-24 10:09:19,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3208.5, 300 sec: 2961.7). Total num frames: 192512. Throughput: 0: 886.1. Samples: 48170. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:09:19,110][00224] Avg episode reward: [(0, '4.590')]
[2024-07-24 10:09:19,112][01056] Saving new best policy, reward=4.590!
[2024-07-24 10:09:21,647][01069] Updated weights for policy 0, policy_version 50 (0.0036)
[2024-07-24 10:09:24,101][00224] Fps is (10 sec: 3277.3, 60 sec: 3549.9, 300 sec: 3042.7). Total num frames: 212992. Throughput: 0: 898.2. Samples: 54214. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:09:24,103][00224] Avg episode reward: [(0, '4.542')]
[2024-07-24 10:09:29,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3167.6). Total num frames: 237568. Throughput: 0: 929.2. Samples: 57606. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:09:29,103][00224] Avg episode reward: [(0, '4.264')]
[2024-07-24 10:09:31,093][01069] Updated weights for policy 0, policy_version 60 (0.0031)
[2024-07-24 10:09:34,103][00224] Fps is (10 sec: 3685.5, 60 sec: 3549.7, 300 sec: 3123.1). Total num frames: 249856. Throughput: 0: 926.2. Samples: 62922. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:09:34,113][00224] Avg episode reward: [(0, '4.415')]
[2024-07-24 10:09:39,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3180.4). Total num frames: 270336. Throughput: 0: 908.6. Samples: 68094. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:09:39,105][00224] Avg episode reward: [(0, '4.379')]
[2024-07-24 10:09:42,484][01069] Updated weights for policy 0, policy_version 70 (0.0021)
[2024-07-24 10:09:44,101][00224] Fps is (10 sec: 4097.0, 60 sec: 3754.7, 300 sec: 3231.3). Total num frames: 290816. Throughput: 0: 928.6. Samples: 71356. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:09:44,105][00224] Avg episode reward: [(0, '4.326')]
[2024-07-24 10:09:44,113][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth...
[2024-07-24 10:09:49,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 311296. Throughput: 0: 965.2. Samples: 77372. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:09:49,106][00224] Avg episode reward: [(0, '4.500')]
[2024-07-24 10:09:54,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3235.8). Total num frames: 323584. Throughput: 0: 905.8. Samples: 81256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:09:54,106][00224] Avg episode reward: [(0, '4.644')]
[2024-07-24 10:09:54,116][01056] Saving new best policy, reward=4.644!
[2024-07-24 10:09:55,094][01069] Updated weights for policy 0, policy_version 80 (0.0026)
[2024-07-24 10:09:59,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3276.8). Total num frames: 344064. Throughput: 0: 901.8. Samples: 84348. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:09:59,107][00224] Avg episode reward: [(0, '4.514')]
[2024-07-24 10:10:04,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3314.0). Total num frames: 364544. Throughput: 0: 949.7. Samples: 90908. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:10:04,105][00224] Avg episode reward: [(0, '4.415')]
[2024-07-24 10:10:05,131][01069] Updated weights for policy 0, policy_version 90 (0.0026)
[2024-07-24 10:10:09,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3276.8). Total num frames: 376832. Throughput: 0: 908.4. Samples: 95094. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:10:09,106][00224] Avg episode reward: [(0, '4.438')]
[2024-07-24 10:10:14,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3310.9). Total num frames: 397312. Throughput: 0: 888.3. Samples: 97580. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:10:14,107][00224] Avg episode reward: [(0, '4.381')]
[2024-07-24 10:10:16,533][01069] Updated weights for policy 0, policy_version 100 (0.0029)
[2024-07-24 10:10:19,101][00224] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3342.3). Total num frames: 417792. Throughput: 0: 916.6. Samples: 104166. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:10:19,105][00224] Avg episode reward: [(0, '4.477')]
[2024-07-24 10:10:24,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3339.8). Total num frames: 434176. Throughput: 0: 920.8. Samples: 109530. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:10:24,107][00224] Avg episode reward: [(0, '4.542')]
[2024-07-24 10:10:28,960][01069] Updated weights for policy 0, policy_version 110 (0.0024)
[2024-07-24 10:10:29,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3337.5). Total num frames: 450560. Throughput: 0: 890.3. Samples: 111418. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:10:29,104][00224] Avg episode reward: [(0, '4.564')]
[2024-07-24 10:10:34,101][00224] Fps is (10 sec: 3686.5, 60 sec: 3686.5, 300 sec: 3364.6). Total num frames: 471040. Throughput: 0: 888.1. Samples: 117338. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:10:34,103][00224] Avg episode reward: [(0, '4.363')]
[2024-07-24 10:10:38,862][01069] Updated weights for policy 0, policy_version 120 (0.0023)
[2024-07-24 10:10:39,102][00224] Fps is (10 sec: 4095.4, 60 sec: 3686.3, 300 sec: 3389.8). Total num frames: 491520. Throughput: 0: 936.1. Samples: 123380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:10:39,104][00224] Avg episode reward: [(0, '4.430')]
[2024-07-24 10:10:44,106][00224] Fps is (10 sec: 3275.0, 60 sec: 3549.5, 300 sec: 3358.6). Total num frames: 503808. Throughput: 0: 910.1. Samples: 125308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:10:44,109][00224] Avg episode reward: [(0, '4.474')]
[2024-07-24 10:10:49,101][00224] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3382.5). Total num frames: 524288. Throughput: 0: 880.4. Samples: 130526. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:10:49,108][00224] Avg episode reward: [(0, '4.468')]
[2024-07-24 10:10:50,601][01069] Updated weights for policy 0, policy_version 130 (0.0016)
[2024-07-24 10:10:54,101][00224] Fps is (10 sec: 4098.2, 60 sec: 3686.4, 300 sec: 3404.8). Total num frames: 544768. Throughput: 0: 934.4. Samples: 137142. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:10:54,108][00224] Avg episode reward: [(0, '4.597')]
[2024-07-24 10:10:59,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3400.9). Total num frames: 561152. Throughput: 0: 939.8. Samples: 139870. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:10:59,105][00224] Avg episode reward: [(0, '4.567')]
[2024-07-24 10:11:02,428][01069] Updated weights for policy 0, policy_version 140 (0.0030)
[2024-07-24 10:11:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3397.3). Total num frames: 577536. Throughput: 0: 889.5. Samples: 144194. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:11:04,104][00224] Avg episode reward: [(0, '4.474')]
[2024-07-24 10:11:09,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3440.6). Total num frames: 602112. Throughput: 0: 916.9. Samples: 150792. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:11:09,103][00224] Avg episode reward: [(0, '4.570')]
[2024-07-24 10:11:11,708][01069] Updated weights for policy 0, policy_version 150 (0.0014)
[2024-07-24 10:11:14,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3754.7, 300 sec: 3458.8). Total num frames: 622592. Throughput: 0: 949.1. Samples: 154126. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-07-24 10:11:14,103][00224] Avg episode reward: [(0, '4.428')]
[2024-07-24 10:11:19,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3431.8). Total num frames: 634880. Throughput: 0: 916.5. Samples: 158580. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:11:19,103][00224] Avg episode reward: [(0, '4.517')]
[2024-07-24 10:11:23,551][01069] Updated weights for policy 0, policy_version 160 (0.0028)
[2024-07-24 10:11:24,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3449.3). Total num frames: 655360. Throughput: 0: 915.2. Samples: 164562. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:11:24,107][00224] Avg episode reward: [(0, '4.434')]
[2024-07-24 10:11:29,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3465.8). Total num frames: 675840. Throughput: 0: 945.1. Samples: 167832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:11:29,110][00224] Avg episode reward: [(0, '4.547')]
[2024-07-24 10:11:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3461.1). Total num frames: 692224. Throughput: 0: 948.8. Samples: 173220. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:11:34,103][00224] Avg episode reward: [(0, '4.657')]
[2024-07-24 10:11:34,118][01056] Saving new best policy, reward=4.657!
[2024-07-24 10:11:34,820][01069] Updated weights for policy 0, policy_version 170 (0.0036)
[2024-07-24 10:11:39,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3456.6). Total num frames: 708608. Throughput: 0: 910.4. Samples: 178110. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:11:39,103][00224] Avg episode reward: [(0, '4.685')]
[2024-07-24 10:11:39,107][01056] Saving new best policy, reward=4.685!
[2024-07-24 10:11:44,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3823.3, 300 sec: 3491.4). Total num frames: 733184. Throughput: 0: 919.8. Samples: 181260. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:11:44,103][00224] Avg episode reward: [(0, '4.623')]
[2024-07-24 10:11:44,116][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth...
[2024-07-24 10:11:45,023][01069] Updated weights for policy 0, policy_version 180 (0.0037)
[2024-07-24 10:11:49,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3486.4). Total num frames: 749568. Throughput: 0: 959.5. Samples: 187370. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:11:49,106][00224] Avg episode reward: [(0, '4.506')]
[2024-07-24 10:11:54,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3481.6). Total num frames: 765952. Throughput: 0: 905.2. Samples: 191526. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-07-24 10:11:54,105][00224] Avg episode reward: [(0, '4.540')]
[2024-07-24 10:11:57,027][01069] Updated weights for policy 0, policy_version 190 (0.0020)
[2024-07-24 10:11:59,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3754.7, 300 sec: 3495.2). Total num frames: 786432. Throughput: 0: 904.6. Samples: 194832. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:11:59,103][00224] Avg episode reward: [(0, '4.711')]
[2024-07-24 10:11:59,110][01056] Saving new best policy, reward=4.711!
[2024-07-24 10:12:04,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3508.3). Total num frames: 806912. Throughput: 0: 952.5. Samples: 201442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:12:04,107][00224] Avg episode reward: [(0, '4.687')]
[2024-07-24 10:12:07,753][01069] Updated weights for policy 0, policy_version 200 (0.0021)
[2024-07-24 10:12:09,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3486.0). Total num frames: 819200. Throughput: 0: 921.6. Samples: 206034. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:12:09,109][00224] Avg episode reward: [(0, '4.796')]
[2024-07-24 10:12:09,137][01056] Saving new best policy, reward=4.796!
[2024-07-24 10:12:14,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3498.7). Total num frames: 839680. Throughput: 0: 898.1. Samples: 208246. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:12:14,106][00224] Avg episode reward: [(0, '4.720')]
[2024-07-24 10:12:18,339][01069] Updated weights for policy 0, policy_version 210 (0.0049)
[2024-07-24 10:12:19,101][00224] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3527.6). Total num frames: 864256. Throughput: 0: 928.0. Samples: 214982. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:12:19,108][00224] Avg episode reward: [(0, '4.691')]
[2024-07-24 10:12:24,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3522.6). Total num frames: 880640. Throughput: 0: 943.2. Samples: 220552. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-07-24 10:12:24,103][00224] Avg episode reward: [(0, '4.823')]
[2024-07-24 10:12:24,116][01056] Saving new best policy, reward=4.823!
[2024-07-24 10:12:29,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3501.7). Total num frames: 892928. Throughput: 0: 917.3. Samples: 222538. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-07-24 10:12:29,104][00224] Avg episode reward: [(0, '4.761')]
[2024-07-24 10:12:30,161][01069] Updated weights for policy 0, policy_version 220 (0.0027)
[2024-07-24 10:12:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3528.9). Total num frames: 917504. Throughput: 0: 915.6. Samples: 228570. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:12:34,102][00224] Avg episode reward: [(0, '4.576')]
[2024-07-24 10:12:39,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3539.6). Total num frames: 937984. Throughput: 0: 968.3. Samples: 235098. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2024-07-24 10:12:39,102][00224] Avg episode reward: [(0, '4.603')]
[2024-07-24 10:12:40,136][01069] Updated weights for policy 0, policy_version 230 (0.0029)
[2024-07-24 10:12:44,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3519.5). Total num frames: 950272. Throughput: 0: 938.4. Samples: 237062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:12:44,104][00224] Avg episode reward: [(0, '4.475')]
[2024-07-24 10:12:49,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3530.0). Total num frames: 970752. Throughput: 0: 904.5. Samples: 242144. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:12:49,105][00224] Avg episode reward: [(0, '4.611')]
[2024-07-24 10:12:51,536][01069] Updated weights for policy 0, policy_version 240 (0.0027)
[2024-07-24 10:12:54,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3754.7, 300 sec: 3540.1). Total num frames: 991232. Throughput: 0: 949.5. Samples: 248760. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:12:54,107][00224] Avg episode reward: [(0, '4.685')]
[2024-07-24 10:12:59,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3535.5). Total num frames: 1007616. Throughput: 0: 965.0. Samples: 251670. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:12:59,105][00224] Avg episode reward: [(0, '4.817')]
[2024-07-24 10:13:03,435][01069] Updated weights for policy 0, policy_version 250 (0.0026)
[2024-07-24 10:13:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3531.0). Total num frames: 1024000. Throughput: 0: 907.9. Samples: 255836. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:13:04,107][00224] Avg episode reward: [(0, '4.996')]
[2024-07-24 10:13:04,118][01056] Saving new best policy, reward=4.996!
[2024-07-24 10:13:09,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3554.5). Total num frames: 1048576. Throughput: 0: 930.5. Samples: 262424. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:13:09,102][00224] Avg episode reward: [(0, '4.771')]
[2024-07-24 10:13:12,794][01069] Updated weights for policy 0, policy_version 260 (0.0027)
[2024-07-24 10:13:14,105][00224] Fps is (10 sec: 4503.6, 60 sec: 3822.7, 300 sec: 3623.9). Total num frames: 1069056. Throughput: 0: 956.8. Samples: 265600. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:13:14,110][00224] Avg episode reward: [(0, '4.721')]
[2024-07-24 10:13:19,106][00224] Fps is (10 sec: 3275.0, 60 sec: 3617.8, 300 sec: 3665.5). Total num frames: 1081344. Throughput: 0: 925.3. Samples: 270212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:13:19,108][00224] Avg episode reward: [(0, '4.881')]
[2024-07-24 10:13:24,101][00224] Fps is (10 sec: 3278.3, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1101824. Throughput: 0: 908.9. Samples: 276000. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:13:24,107][00224] Avg episode reward: [(0, '4.873')]
[2024-07-24 10:13:24,796][01069] Updated weights for policy 0, policy_version 270 (0.0049)
[2024-07-24 10:13:29,103][00224] Fps is (10 sec: 4097.4, 60 sec: 3822.8, 300 sec: 3679.4). Total num frames: 1122304. Throughput: 0: 938.4. Samples: 279290. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:13:29,110][00224] Avg episode reward: [(0, '4.843')]
[2024-07-24 10:13:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1138688. Throughput: 0: 949.2. Samples: 284856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:13:34,105][00224] Avg episode reward: [(0, '4.919')]
[2024-07-24 10:13:36,373][01069] Updated weights for policy 0, policy_version 280 (0.0015)
[2024-07-24 10:13:39,101][00224] Fps is (10 sec: 3277.4, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1155072. Throughput: 0: 911.2. Samples: 289766. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:13:39,107][00224] Avg episode reward: [(0, '5.084')]
[2024-07-24 10:13:39,189][01056] Saving new best policy, reward=5.084!
[2024-07-24 10:13:44,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1179648. Throughput: 0: 915.5. Samples: 292868. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:13:44,106][00224] Avg episode reward: [(0, '5.259')]
[2024-07-24 10:13:44,116][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000288_1179648.pth...
[2024-07-24 10:13:44,241][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000071_290816.pth
[2024-07-24 10:13:44,261][01056] Saving new best policy, reward=5.259!
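
Checkpoint names encode the policy version and total frame count (checkpoint_000000288_1179648.pth is version 288 at 1,179,648 frames), and each periodic save is paired with removal of the oldest regular checkpoint, so only the most recent few survive. A keep-last-N rotation in sketch form (N=2 is inferred from this log's save/remove pattern, not from any config shown here):

```python
from pathlib import Path
import torch

def save_with_rotation(state_dict, ckpt_dir, policy_version, env_steps, keep_last=2):
    ckpt_dir = Path(ckpt_dir)
    path = ckpt_dir / f"checkpoint_{policy_version:09d}_{env_steps}.pth"
    print(f"Saving {path}...")
    torch.save(state_dict, path)
    # Zero-padded versions sort lexicographically in chronological order,
    # so everything before the last `keep_last` entries is safe to delete.
    checkpoints = sorted(ckpt_dir.glob("checkpoint_*.pth"))
    for old in checkpoints[:-keep_last]:
        print(f"Removing {old}")
        old.unlink()
```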
|
[2024-07-24 10:13:46,077][01069] Updated weights for policy 0, policy_version 290 (0.0032)
[2024-07-24 10:13:49,103][00224] Fps is (10 sec: 4095.0, 60 sec: 3754.5, 300 sec: 3693.3). Total num frames: 1196032. Throughput: 0: 956.9. Samples: 298900. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:13:49,106][00224] Avg episode reward: [(0, '5.301')]
[2024-07-24 10:13:49,111][01056] Saving new best policy, reward=5.301!
[2024-07-24 10:13:54,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3693.4). Total num frames: 1208320. Throughput: 0: 901.2. Samples: 302976. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:13:54,103][00224] Avg episode reward: [(0, '5.192')]
[2024-07-24 10:13:57,829][01069] Updated weights for policy 0, policy_version 300 (0.0038)
[2024-07-24 10:13:59,101][00224] Fps is (10 sec: 3687.3, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1232896. Throughput: 0: 905.8. Samples: 306358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:13:59,103][00224] Avg episode reward: [(0, '5.408')]
[2024-07-24 10:13:59,108][01056] Saving new best policy, reward=5.408!
[2024-07-24 10:14:04,103][00224] Fps is (10 sec: 4504.5, 60 sec: 3822.8, 300 sec: 3707.2). Total num frames: 1253376. Throughput: 0: 949.3. Samples: 312928. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:14:04,105][00224] Avg episode reward: [(0, '5.938')]
[2024-07-24 10:14:04,117][01056] Saving new best policy, reward=5.938!
[2024-07-24 10:14:09,104][00224] Fps is (10 sec: 3275.7, 60 sec: 3617.9, 300 sec: 3679.4). Total num frames: 1265664. Throughput: 0: 919.8. Samples: 317392. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:14:09,110][00224] Avg episode reward: [(0, '6.067')]
[2024-07-24 10:14:09,112][01056] Saving new best policy, reward=6.067!
[2024-07-24 10:14:09,564][01069] Updated weights for policy 0, policy_version 310 (0.0016)
[2024-07-24 10:14:14,101][00224] Fps is (10 sec: 3277.4, 60 sec: 3618.4, 300 sec: 3707.2). Total num frames: 1286144. Throughput: 0: 897.6. Samples: 319680. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:14:14,104][00224] Avg episode reward: [(0, '5.733')]
[2024-07-24 10:14:19,101][00224] Fps is (10 sec: 4097.4, 60 sec: 3755.0, 300 sec: 3707.2). Total num frames: 1306624. Throughput: 0: 924.2. Samples: 326446. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:14:19,105][00224] Avg episode reward: [(0, '5.662')]
[2024-07-24 10:14:19,219][01069] Updated weights for policy 0, policy_version 320 (0.0017)
[2024-07-24 10:14:24,101][00224] Fps is (10 sec: 4096.2, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 1327104. Throughput: 0: 939.0. Samples: 332020. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:14:24,106][00224] Avg episode reward: [(0, '5.605')]
[2024-07-24 10:14:29,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3693.4). Total num frames: 1339392. Throughput: 0: 916.6. Samples: 334114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:14:29,103][00224] Avg episode reward: [(0, '5.619')]
[2024-07-24 10:14:31,071][01069] Updated weights for policy 0, policy_version 330 (0.0034)
[2024-07-24 10:14:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1363968. Throughput: 0: 922.3. Samples: 340400. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:14:34,104][00224] Avg episode reward: [(0, '5.644')]
[2024-07-24 10:14:39,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1384448. Throughput: 0: 974.2. Samples: 346816. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:14:39,103][00224] Avg episode reward: [(0, '5.503')]
[2024-07-24 10:14:41,503][01069] Updated weights for policy 0, policy_version 340 (0.0048)
[2024-07-24 10:14:44,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3679.5). Total num frames: 1396736. Throughput: 0: 943.8. Samples: 348828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:14:44,104][00224] Avg episode reward: [(0, '5.400')]
[2024-07-24 10:14:49,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.6, 300 sec: 3707.2). Total num frames: 1417216. Throughput: 0: 911.3. Samples: 353936. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:14:49,103][00224] Avg episode reward: [(0, '5.413')]
[2024-07-24 10:14:52,070][01069] Updated weights for policy 0, policy_version 350 (0.0020)
[2024-07-24 10:14:54,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 1441792. Throughput: 0: 963.0. Samples: 360724. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:14:54,102][00224] Avg episode reward: [(0, '5.289')]
[2024-07-24 10:14:59,103][00224] Fps is (10 sec: 4095.0, 60 sec: 3754.5, 300 sec: 3707.2). Total num frames: 1458176. Throughput: 0: 974.8. Samples: 363546. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:14:59,106][00224] Avg episode reward: [(0, '5.414')]
[2024-07-24 10:15:03,856][01069] Updated weights for policy 0, policy_version 360 (0.0037)
[2024-07-24 10:15:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.5, 300 sec: 3721.1). Total num frames: 1474560. Throughput: 0: 920.9. Samples: 367888. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:15:04,103][00224] Avg episode reward: [(0, '5.749')]
[2024-07-24 10:15:09,101][00224] Fps is (10 sec: 3687.3, 60 sec: 3823.2, 300 sec: 3721.1). Total num frames: 1495040. Throughput: 0: 938.0. Samples: 374232. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:15:09,103][00224] Avg episode reward: [(0, '5.988')]
[2024-07-24 10:15:14,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1511424. Throughput: 0: 958.0. Samples: 377224. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-07-24 10:15:14,104][00224] Avg episode reward: [(0, '6.095')]
[2024-07-24 10:15:14,116][01056] Saving new best policy, reward=6.095!
[2024-07-24 10:15:14,786][01069] Updated weights for policy 0, policy_version 370 (0.0023)
[2024-07-24 10:15:19,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3618.1, 300 sec: 3693.3). Total num frames: 1523712. Throughput: 0: 908.2. Samples: 381270. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:15:19,104][00224] Avg episode reward: [(0, '6.153')]
[2024-07-24 10:15:19,111][01056] Saving new best policy, reward=6.153!
[2024-07-24 10:15:24,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3707.2). Total num frames: 1544192. Throughput: 0: 890.7. Samples: 386898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:24,103][00224] Avg episode reward: [(0, '6.083')]
[2024-07-24 10:15:26,302][01069] Updated weights for policy 0, policy_version 380 (0.0015)
[2024-07-24 10:15:29,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3721.1). Total num frames: 1568768. Throughput: 0: 919.0. Samples: 390182. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:29,103][00224] Avg episode reward: [(0, '6.369')]
[2024-07-24 10:15:29,108][01056] Saving new best policy, reward=6.369!
[2024-07-24 10:15:34,107][00224] Fps is (10 sec: 3684.1, 60 sec: 3617.8, 300 sec: 3693.3). Total num frames: 1581056. Throughput: 0: 916.9. Samples: 395202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:34,110][00224] Avg episode reward: [(0, '6.795')]
[2024-07-24 10:15:34,132][01056] Saving new best policy, reward=6.795!
[2024-07-24 10:15:38,542][01069] Updated weights for policy 0, policy_version 390 (0.0042)
[2024-07-24 10:15:39,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3707.3). Total num frames: 1597440. Throughput: 0: 872.4. Samples: 399984. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:39,109][00224] Avg episode reward: [(0, '6.970')]
[2024-07-24 10:15:39,111][01056] Saving new best policy, reward=6.970!
[2024-07-24 10:15:44,101][00224] Fps is (10 sec: 3688.8, 60 sec: 3686.4, 300 sec: 3707.2). Total num frames: 1617920. Throughput: 0: 874.1. Samples: 402880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:15:44,102][00224] Avg episode reward: [(0, '7.470')]
[2024-07-24 10:15:44,120][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000395_1617920.pth...
[2024-07-24 10:15:44,246][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth
[2024-07-24 10:15:44,257][01056] Saving new best policy, reward=7.470!
[2024-07-24 10:15:49,106][00224] Fps is (10 sec: 3684.3, 60 sec: 3617.8, 300 sec: 3693.3). Total num frames: 1634304. Throughput: 0: 896.0. Samples: 408214. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2024-07-24 10:15:49,112][00224] Avg episode reward: [(0, '7.354')]
[2024-07-24 10:15:50,692][01069] Updated weights for policy 0, policy_version 400 (0.0037)
[2024-07-24 10:15:54,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3413.3, 300 sec: 3679.5). Total num frames: 1646592. Throughput: 0: 848.0. Samples: 412392. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:54,106][00224] Avg episode reward: [(0, '7.679')]
[2024-07-24 10:15:54,115][01056] Saving new best policy, reward=7.679!
[2024-07-24 10:15:59,101][00224] Fps is (10 sec: 3278.6, 60 sec: 3481.7, 300 sec: 3693.3). Total num frames: 1667072. Throughput: 0: 851.3. Samples: 415532. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:15:59,105][00224] Avg episode reward: [(0, '7.638')]
[2024-07-24 10:16:01,052][01069] Updated weights for policy 0, policy_version 410 (0.0035)
[2024-07-24 10:16:04,101][00224] Fps is (10 sec: 4096.1, 60 sec: 3549.9, 300 sec: 3679.5). Total num frames: 1687552. Throughput: 0: 912.0. Samples: 422308. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:16:04,108][00224] Avg episode reward: [(0, '8.186')]
[2024-07-24 10:16:04,118][01056] Saving new best policy, reward=8.186!
[2024-07-24 10:16:09,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3665.6). Total num frames: 1703936. Throughput: 0: 878.0. Samples: 426408. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:16:09,109][00224] Avg episode reward: [(0, '8.884')]
[2024-07-24 10:16:09,112][01056] Saving new best policy, reward=8.884!
[2024-07-24 10:16:13,127][01069] Updated weights for policy 0, policy_version 420 (0.0037)
[2024-07-24 10:16:14,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3693.3). Total num frames: 1724416. Throughput: 0: 866.8. Samples: 429190. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:16:14,108][00224] Avg episode reward: [(0, '9.016')]
[2024-07-24 10:16:14,118][01056] Saving new best policy, reward=9.016!
[2024-07-24 10:16:19,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 1744896. Throughput: 0: 900.4. Samples: 435712. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:16:19,105][00224] Avg episode reward: [(0, '9.374')]
[2024-07-24 10:16:19,110][01056] Saving new best policy, reward=9.374!
[2024-07-24 10:16:24,019][01069] Updated weights for policy 0, policy_version 430 (0.0038)
[2024-07-24 10:16:24,108][00224] Fps is (10 sec: 3683.6, 60 sec: 3617.7, 300 sec: 3679.4). Total num frames: 1761280. Throughput: 0: 904.2. Samples: 440678. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:16:24,111][00224] Avg episode reward: [(0, '9.517')]
[2024-07-24 10:16:24,119][01056] Saving new best policy, reward=9.517!
[2024-07-24 10:16:29,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3679.5). Total num frames: 1777664. Throughput: 0: 886.9. Samples: 442790. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:16:29,106][00224] Avg episode reward: [(0, '9.558')]
[2024-07-24 10:16:29,110][01056] Saving new best policy, reward=9.558!
[2024-07-24 10:16:34,101][00224] Fps is (10 sec: 3689.1, 60 sec: 3618.5, 300 sec: 3693.3). Total num frames: 1798144. Throughput: 0: 916.6. Samples: 449454. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:16:34,103][00224] Avg episode reward: [(0, '9.367')]
[2024-07-24 10:16:34,133][01069] Updated weights for policy 0, policy_version 440 (0.0025)
[2024-07-24 10:16:39,102][00224] Fps is (10 sec: 4095.4, 60 sec: 3686.3, 300 sec: 3679.4). Total num frames: 1818624. Throughput: 0: 952.0. Samples: 455232. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:16:39,109][00224] Avg episode reward: [(0, '9.106')]
[2024-07-24 10:16:44,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 1830912. Throughput: 0: 929.0. Samples: 457336. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:16:44,107][00224] Avg episode reward: [(0, '9.193')]
[2024-07-24 10:16:46,088][01069] Updated weights for policy 0, policy_version 450 (0.0025)
[2024-07-24 10:16:49,101][00224] Fps is (10 sec: 3686.9, 60 sec: 3686.7, 300 sec: 3693.3). Total num frames: 1855488. Throughput: 0: 913.5. Samples: 463416. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:16:49,102][00224] Avg episode reward: [(0, '9.577')]
[2024-07-24 10:16:49,113][01056] Saving new best policy, reward=9.577!
[2024-07-24 10:16:54,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 1875968. Throughput: 0: 968.2. Samples: 469976. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:16:54,103][00224] Avg episode reward: [(0, '10.892')]
[2024-07-24 10:16:54,112][01056] Saving new best policy, reward=10.892!
[2024-07-24 10:16:56,303][01069] Updated weights for policy 0, policy_version 460 (0.0019)
[2024-07-24 10:16:59,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 1888256. Throughput: 0: 950.4. Samples: 471960. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:16:59,108][00224] Avg episode reward: [(0, '10.668')]
[2024-07-24 10:17:04,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 1912832. Throughput: 0: 924.4. Samples: 477310. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:17:04,111][00224] Avg episode reward: [(0, '10.985')]
[2024-07-24 10:17:04,122][01056] Saving new best policy, reward=10.985!
[2024-07-24 10:17:06,775][01069] Updated weights for policy 0, policy_version 470 (0.0019)
[2024-07-24 10:17:09,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 1933312. Throughput: 0: 962.5. Samples: 483982. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:09,108][00224] Avg episode reward: [(0, '10.220')]
[2024-07-24 10:17:14,104][00224] Fps is (10 sec: 3685.4, 60 sec: 3754.5, 300 sec: 3679.4). Total num frames: 1949696. Throughput: 0: 976.4. Samples: 486732. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:14,109][00224] Avg episode reward: [(0, '10.535')]
[2024-07-24 10:17:18,568][01069] Updated weights for policy 0, policy_version 480 (0.0019)
[2024-07-24 10:17:19,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 1966080. Throughput: 0: 923.2. Samples: 490996. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:19,105][00224] Avg episode reward: [(0, '10.780')]
[2024-07-24 10:17:24,101][00224] Fps is (10 sec: 4097.1, 60 sec: 3823.4, 300 sec: 3721.1). Total num frames: 1990656. Throughput: 0: 946.3. Samples: 497812. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:24,111][00224] Avg episode reward: [(0, '12.027')]
[2024-07-24 10:17:24,121][01056] Saving new best policy, reward=12.027!
[2024-07-24 10:17:28,198][01069] Updated weights for policy 0, policy_version 490 (0.0017)
[2024-07-24 10:17:29,101][00224] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 2007040. Throughput: 0: 971.7. Samples: 501062. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:29,109][00224] Avg episode reward: [(0, '12.519')]
[2024-07-24 10:17:29,112][01056] Saving new best policy, reward=12.519!
[2024-07-24 10:17:34,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2019328. Throughput: 0: 929.9. Samples: 505260. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:34,103][00224] Avg episode reward: [(0, '13.374')]
[2024-07-24 10:17:34,118][01056] Saving new best policy, reward=13.374!
[2024-07-24 10:17:39,101][00224] Fps is (10 sec: 3277.0, 60 sec: 3686.5, 300 sec: 3693.3). Total num frames: 2039808. Throughput: 0: 911.8. Samples: 511008. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:17:39,107][00224] Avg episode reward: [(0, '14.815')]
[2024-07-24 10:17:39,182][01056] Saving new best policy, reward=14.815!
[2024-07-24 10:17:40,119][01069] Updated weights for policy 0, policy_version 500 (0.0020)
[2024-07-24 10:17:44,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2064384. Throughput: 0: 940.9. Samples: 514300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:17:44,103][00224] Avg episode reward: [(0, '15.357')]
[2024-07-24 10:17:44,114][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000504_2064384.pth...
[2024-07-24 10:17:44,265][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000288_1179648.pth
[2024-07-24 10:17:44,288][01056] Saving new best policy, reward=15.357!
[2024-07-24 10:17:49,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2076672. Throughput: 0: 931.5. Samples: 519226. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:17:49,107][00224] Avg episode reward: [(0, '15.759')]
[2024-07-24 10:17:49,109][01056] Saving new best policy, reward=15.759!
[2024-07-24 10:17:52,231][01069] Updated weights for policy 0, policy_version 510 (0.0031)
[2024-07-24 10:17:54,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2097152. Throughput: 0: 900.8. Samples: 524520. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2024-07-24 10:17:54,109][00224] Avg episode reward: [(0, '15.928')]
[2024-07-24 10:17:54,122][01056] Saving new best policy, reward=15.928!
[2024-07-24 10:17:59,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 2117632. Throughput: 0: 914.2. Samples: 527870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
[2024-07-24 10:17:59,103][00224] Avg episode reward: [(0, '15.047')]
[2024-07-24 10:18:01,365][01069] Updated weights for policy 0, policy_version 520 (0.0015)
[2024-07-24 10:18:04,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2134016. Throughput: 0: 952.7. Samples: 533870. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:18:04,108][00224] Avg episode reward: [(0, '14.890')]
[2024-07-24 10:18:09,103][00224] Fps is (10 sec: 3275.9, 60 sec: 3618.0, 300 sec: 3665.6). Total num frames: 2150400. Throughput: 0: 894.8. Samples: 538080. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:18:09,105][00224] Avg episode reward: [(0, '14.388')]
[2024-07-24 10:18:13,638][01069] Updated weights for policy 0, policy_version 530 (0.0019)
[2024-07-24 10:18:14,101][00224] Fps is (10 sec: 3686.6, 60 sec: 3686.6, 300 sec: 3693.4). Total num frames: 2170880. Throughput: 0: 894.2. Samples: 541300. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:18:14,105][00224] Avg episode reward: [(0, '14.548')]
[2024-07-24 10:18:19,101][00224] Fps is (10 sec: 4097.1, 60 sec: 3754.7, 300 sec: 3693.3). Total num frames: 2191360. Throughput: 0: 940.7. Samples: 547592. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
[2024-07-24 10:18:19,104][00224] Avg episode reward: [(0, '15.585')]
[2024-07-24 10:18:24,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2203648. Throughput: 0: 907.9. Samples: 551864. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2024-07-24 10:18:24,103][00224] Avg episode reward: [(0, '15.433')]
[2024-07-24 10:18:25,706][01069] Updated weights for policy 0, policy_version 540 (0.0030)
[2024-07-24 10:18:29,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3679.5). Total num frames: 2224128. Throughput: 0: 897.2. Samples: 554676. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2024-07-24 10:18:29,108][00224] Avg episode reward: [(0, '15.736')]
[2024-07-24 10:18:34,101][00224] Fps is (10 sec: 4505.7, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 2248704. Throughput: 0: 937.7. Samples: 561422. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:18:34,107][00224] Avg episode reward: [(0, '16.297')]
[2024-07-24 10:18:34,121][01056] Saving new best policy, reward=16.297!
[2024-07-24 10:18:35,061][01069] Updated weights for policy 0, policy_version 550 (0.0024)
[2024-07-24 10:18:39,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3665.6). Total num frames: 2260992. Throughput: 0: 929.6. Samples: 566352. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:18:39,106][00224] Avg episode reward: [(0, '16.403')]
[2024-07-24 10:18:39,113][01056] Saving new best policy, reward=16.403!
[2024-07-24 10:18:44,101][00224] Fps is (10 sec: 2867.2, 60 sec: 3549.9, 300 sec: 3665.6). Total num frames: 2277376. Throughput: 0: 898.7. Samples: 568310. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
[2024-07-24 10:18:44,105][00224] Avg episode reward: [(0, '16.414')]
[2024-07-24 10:18:44,116][01056] Saving new best policy, reward=16.414!
[2024-07-24 10:18:47,270][01069] Updated weights for policy 0, policy_version 560 (0.0018)
[2024-07-24 10:18:49,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2297856. Throughput: 0: 905.9. Samples: 574634. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2024-07-24 10:18:49,105][00224] Avg episode reward: [(0, '16.687')] |
|
[2024-07-24 10:18:49,180][01056] Saving new best policy, reward=16.687! |
|
[2024-07-24 10:18:54,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2318336. Throughput: 0: 936.5. Samples: 580222. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:18:54,103][00224] Avg episode reward: [(0, '16.114')] |
|
[2024-07-24 10:18:58,976][01069] Updated weights for policy 0, policy_version 570 (0.0036) |
|
[2024-07-24 10:18:59,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3665.6). Total num frames: 2334720. Throughput: 0: 912.0. Samples: 582338. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:18:59,104][00224] Avg episode reward: [(0, '16.261')] |
|
[2024-07-24 10:19:04,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3686.4, 300 sec: 3693.4). Total num frames: 2355200. Throughput: 0: 911.8. Samples: 588622. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:19:04,106][00224] Avg episode reward: [(0, '15.687')] |
|
[2024-07-24 10:19:07,904][01069] Updated weights for policy 0, policy_version 580 (0.0023) |
|
[2024-07-24 10:19:09,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3823.1, 300 sec: 3707.2). Total num frames: 2379776. Throughput: 0: 966.7. Samples: 595366. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:19:09,104][00224] Avg episode reward: [(0, '15.950')] |
|
[2024-07-24 10:19:14,101][00224] Fps is (10 sec: 3686.5, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2392064. Throughput: 0: 951.4. Samples: 597488. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:19:14,103][00224] Avg episode reward: [(0, '16.158')] |
|
[2024-07-24 10:19:19,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2412544. Throughput: 0: 918.3. Samples: 602744. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:19:19,105][00224] Avg episode reward: [(0, '17.624')] |
|
[2024-07-24 10:19:19,109][01056] Saving new best policy, reward=17.624! |
|
[2024-07-24 10:19:19,563][01069] Updated weights for policy 0, policy_version 590 (0.0028) |
|
[2024-07-24 10:19:24,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3721.1). Total num frames: 2437120. Throughput: 0: 960.3. Samples: 609566. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-07-24 10:19:24,107][00224] Avg episode reward: [(0, '17.611')] |
|
[2024-07-24 10:19:29,101][00224] Fps is (10 sec: 4095.8, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 2453504. Throughput: 0: 979.4. Samples: 612384. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:19:29,108][00224] Avg episode reward: [(0, '18.145')] |
|
[2024-07-24 10:19:29,110][01056] Saving new best policy, reward=18.145! |
|
[2024-07-24 10:19:30,397][01069] Updated weights for policy 0, policy_version 600 (0.0023) |
|
[2024-07-24 10:19:34,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3679.5). Total num frames: 2469888. Throughput: 0: 938.7. Samples: 616876. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:19:34,103][00224] Avg episode reward: [(0, '18.237')] |
|
[2024-07-24 10:19:34,121][01056] Saving new best policy, reward=18.237! |
|
[2024-07-24 10:19:39,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3707.2). Total num frames: 2490368. Throughput: 0: 964.3. Samples: 623614. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:19:39,106][00224] Avg episode reward: [(0, '17.590')] |
|
[2024-07-24 10:19:40,228][01069] Updated weights for policy 0, policy_version 610 (0.0030) |
|
[2024-07-24 10:19:44,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3707.2). Total num frames: 2510848. Throughput: 0: 992.8. Samples: 627012. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:19:44,107][00224] Avg episode reward: [(0, '17.003')] |
|
[2024-07-24 10:19:44,121][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000613_2510848.pth... |
|
[2024-07-24 10:19:44,343][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000395_1617920.pth |
|
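Periodic checkpoints are rotated: each new `checkpoint_<version>_<env_steps>.pth` save is paired with deleting the oldest one, and in this run two checkpoints stay on disk at a time (613 is saved while 395 is removed). A sketch of that rotation, assuming the zero-padded filenames sort chronologically (`keep=2` mirrors what the log shows):

```python
from pathlib import Path

def rotate_checkpoints(ckpt_dir, keep=2):
    """Keep the `keep` newest checkpoints, delete the rest (a sketch)."""
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))  # zero-padded => chronological
    for old in ckpts[:-keep]:
        print(f"Removing {old}")
        old.unlink()
```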
[2024-07-24 10:19:49,101][00224] Fps is (10 sec: 3276.9, 60 sec: 3754.7, 300 sec: 3665.6). Total num frames: 2523136. Throughput: 0: 948.6. Samples: 631308. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:19:49,104][00224] Avg episode reward: [(0, '17.661')] |
|
[2024-07-24 10:19:51,933][01069] Updated weights for policy 0, policy_version 620 (0.0029) |
|
[2024-07-24 10:19:54,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3693.4). Total num frames: 2547712. Throughput: 0: 937.6. Samples: 637556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:19:54,105][00224] Avg episode reward: [(0, '17.483')] |
|
[2024-07-24 10:19:59,101][00224] Fps is (10 sec: 4915.2, 60 sec: 3959.5, 300 sec: 3721.1). Total num frames: 2572288. Throughput: 0: 966.4. Samples: 640976. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:19:59,105][00224] Avg episode reward: [(0, '16.862')] |
|
[2024-07-24 10:20:01,715][01069] Updated weights for policy 0, policy_version 630 (0.0025) |
|
[2024-07-24 10:20:04,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3693.3). Total num frames: 2584576. Throughput: 0: 966.8. Samples: 646252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:20:04,103][00224] Avg episode reward: [(0, '16.369')] |
|
[2024-07-24 10:20:09,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3707.2). Total num frames: 2605056. Throughput: 0: 934.3. Samples: 651608. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:20:09,103][00224] Avg episode reward: [(0, '16.426')] |
|
[2024-07-24 10:20:12,476][01069] Updated weights for policy 0, policy_version 640 (0.0017) |
|
[2024-07-24 10:20:14,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3735.0). Total num frames: 2625536. Throughput: 0: 945.7. Samples: 654938. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:20:14,103][00224] Avg episode reward: [(0, '16.413')] |
|
[2024-07-24 10:20:19,106][00224] Fps is (10 sec: 3684.4, 60 sec: 3822.6, 300 sec: 3721.0). Total num frames: 2641920. Throughput: 0: 977.2. Samples: 660856. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:20:19,108][00224] Avg episode reward: [(0, '16.901')] |
|
[2024-07-24 10:20:24,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3686.4, 300 sec: 3693.3). Total num frames: 2658304. Throughput: 0: 932.8. Samples: 665590. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:20:24,103][00224] Avg episode reward: [(0, '17.551')] |
|
[2024-07-24 10:20:24,334][01069] Updated weights for policy 0, policy_version 650 (0.0022) |
|
[2024-07-24 10:20:29,101][00224] Fps is (10 sec: 4098.2, 60 sec: 3823.0, 300 sec: 3735.1). Total num frames: 2682880. Throughput: 0: 932.8. Samples: 668986. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:20:29,103][00224] Avg episode reward: [(0, '17.812')] |
|
[2024-07-24 10:20:33,198][01069] Updated weights for policy 0, policy_version 660 (0.0040) |
|
[2024-07-24 10:20:34,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3748.9). Total num frames: 2703360. Throughput: 0: 991.9. Samples: 675942. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:20:34,103][00224] Avg episode reward: [(0, '18.222')] |
|
[2024-07-24 10:20:39,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3735.0). Total num frames: 2719744. Throughput: 0: 947.7. Samples: 680202. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:20:39,103][00224] Avg episode reward: [(0, '19.449')] |
|
[2024-07-24 10:20:39,105][01056] Saving new best policy, reward=19.449! |
|
[2024-07-24 10:20:44,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3749.0). Total num frames: 2740224. Throughput: 0: 941.1. Samples: 683324. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:20:44,103][00224] Avg episode reward: [(0, '19.793')] |
|
[2024-07-24 10:20:44,117][01056] Saving new best policy, reward=19.793! |
|
[2024-07-24 10:20:44,659][01069] Updated weights for policy 0, policy_version 670 (0.0026) |
|
[2024-07-24 10:20:49,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3776.7). Total num frames: 2760704. Throughput: 0: 970.8. Samples: 689936. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:20:49,103][00224] Avg episode reward: [(0, '19.668')] |
|
[2024-07-24 10:20:54,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3762.8). Total num frames: 2777088. Throughput: 0: 960.9. Samples: 694850. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:20:54,103][00224] Avg episode reward: [(0, '20.381')] |
|
[2024-07-24 10:20:54,118][01056] Saving new best policy, reward=20.381! |
|
[2024-07-24 10:20:56,308][01069] Updated weights for policy 0, policy_version 680 (0.0024) |
|
[2024-07-24 10:20:59,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2797568. Throughput: 0: 941.0. Samples: 697284. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:20:59,104][00224] Avg episode reward: [(0, '20.423')] |
|
[2024-07-24 10:20:59,113][01056] Saving new best policy, reward=20.423! |
|
[2024-07-24 10:21:04,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 2818048. Throughput: 0: 962.3. Samples: 704154. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:21:04,108][00224] Avg episode reward: [(0, '20.259')] |
|
[2024-07-24 10:21:05,187][01069] Updated weights for policy 0, policy_version 690 (0.0018) |
|
[2024-07-24 10:21:09,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3776.7). Total num frames: 2838528. Throughput: 0: 986.2. Samples: 709970. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:21:09,102][00224] Avg episode reward: [(0, '19.660')] |
|
[2024-07-24 10:21:14,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3748.9). Total num frames: 2850816. Throughput: 0: 957.5. Samples: 712074. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:21:14,103][00224] Avg episode reward: [(0, '20.044')] |
|
[2024-07-24 10:21:16,924][01069] Updated weights for policy 0, policy_version 700 (0.0039) |
|
[2024-07-24 10:21:19,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3891.6, 300 sec: 3776.7). Total num frames: 2875392. Throughput: 0: 941.3. Samples: 718300. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:21:19,106][00224] Avg episode reward: [(0, '19.794')] |
|
[2024-07-24 10:21:24,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3790.5). Total num frames: 2895872. Throughput: 0: 990.7. Samples: 724784. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:21:24,109][00224] Avg episode reward: [(0, '20.647')] |
|
[2024-07-24 10:21:24,119][01056] Saving new best policy, reward=20.647! |
|
[2024-07-24 10:21:27,927][01069] Updated weights for policy 0, policy_version 710 (0.0018) |
|
[2024-07-24 10:21:29,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2908160. Throughput: 0: 964.8. Samples: 726740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:21:29,103][00224] Avg episode reward: [(0, '20.808')] |
|
[2024-07-24 10:21:29,113][01056] Saving new best policy, reward=20.808! |
|
[2024-07-24 10:21:34,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 2932736. Throughput: 0: 942.0. Samples: 732328. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:21:34,108][00224] Avg episode reward: [(0, '21.284')] |
|
[2024-07-24 10:21:34,125][01056] Saving new best policy, reward=21.284! |
|
[2024-07-24 10:21:37,673][01069] Updated weights for policy 0, policy_version 720 (0.0028) |
|
[2024-07-24 10:21:39,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 2953216. Throughput: 0: 983.2. Samples: 739094. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:21:39,108][00224] Avg episode reward: [(0, '20.530')] |
|
[2024-07-24 10:21:44,102][00224] Fps is (10 sec: 3686.1, 60 sec: 3822.9, 300 sec: 3776.6). Total num frames: 2969600. Throughput: 0: 982.7. Samples: 741508. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:21:44,107][00224] Avg episode reward: [(0, '20.099')] |
|
[2024-07-24 10:21:44,124][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000725_2969600.pth... |
|
[2024-07-24 10:21:44,311][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000504_2064384.pth |
|
[2024-07-24 10:21:49,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 2985984. Throughput: 0: 935.1. Samples: 746232. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:21:49,103][00224] Avg episode reward: [(0, '18.030')] |
|
[2024-07-24 10:21:49,312][01069] Updated weights for policy 0, policy_version 730 (0.0026) |
|
[2024-07-24 10:21:54,101][00224] Fps is (10 sec: 4096.3, 60 sec: 3891.2, 300 sec: 3804.4). Total num frames: 3010560. Throughput: 0: 960.0. Samples: 753168. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:21:54,106][00224] Avg episode reward: [(0, '18.192')] |
|
[2024-07-24 10:21:59,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3776.7). Total num frames: 3026944. Throughput: 0: 986.1. Samples: 756448. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:21:59,111][00224] Avg episode reward: [(0, '17.727')] |
|
[2024-07-24 10:21:59,400][01069] Updated weights for policy 0, policy_version 740 (0.0028) |
|
[2024-07-24 10:22:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 3043328. Throughput: 0: 943.0. Samples: 760736. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:22:04,106][00224] Avg episode reward: [(0, '18.686')] |
|
[2024-07-24 10:22:09,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3822.9, 300 sec: 3790.6). Total num frames: 3067904. Throughput: 0: 947.3. Samples: 767414. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:22:09,106][00224] Avg episode reward: [(0, '19.688')] |
|
[2024-07-24 10:22:09,549][01069] Updated weights for policy 0, policy_version 750 (0.0031) |
|
[2024-07-24 10:22:14,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3804.4). Total num frames: 3088384. Throughput: 0: 981.2. Samples: 770896. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:22:14,107][00224] Avg episode reward: [(0, '20.773')] |
|
[2024-07-24 10:22:19,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3754.7, 300 sec: 3762.8). Total num frames: 3100672. Throughput: 0: 965.4. Samples: 775770. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:22:19,106][00224] Avg episode reward: [(0, '19.987')] |
|
[2024-07-24 10:22:21,370][01069] Updated weights for policy 0, policy_version 760 (0.0034) |
|
[2024-07-24 10:22:24,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 3125248. Throughput: 0: 945.9. Samples: 781658. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:22:24,103][00224] Avg episode reward: [(0, '20.080')] |
|
[2024-07-24 10:22:29,104][00224] Fps is (10 sec: 4504.2, 60 sec: 3959.2, 300 sec: 3818.3). Total num frames: 3145728. Throughput: 0: 967.5. Samples: 785050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:22:29,106][00224] Avg episode reward: [(0, '20.355')] |
|
[2024-07-24 10:22:30,128][01069] Updated weights for policy 0, policy_version 770 (0.0029) |
|
[2024-07-24 10:22:34,104][00224] Fps is (10 sec: 3685.1, 60 sec: 3822.7, 300 sec: 3804.4). Total num frames: 3162112. Throughput: 0: 990.3. Samples: 790798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:22:34,106][00224] Avg episode reward: [(0, '21.014')] |
|
[2024-07-24 10:22:39,101][00224] Fps is (10 sec: 3687.6, 60 sec: 3822.9, 300 sec: 3790.5). Total num frames: 3182592. Throughput: 0: 951.3. Samples: 795976. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:22:39,103][00224] Avg episode reward: [(0, '20.559')] |
|
[2024-07-24 10:22:41,588][01069] Updated weights for policy 0, policy_version 780 (0.0025) |
|
[2024-07-24 10:22:44,101][00224] Fps is (10 sec: 4097.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3203072. Throughput: 0: 954.4. Samples: 799398. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:22:44,103][00224] Avg episode reward: [(0, '22.280')] |
|
[2024-07-24 10:22:44,118][01056] Saving new best policy, reward=22.280! |
|
[2024-07-24 10:22:49,101][00224] Fps is (10 sec: 4096.1, 60 sec: 3959.5, 300 sec: 3818.3). Total num frames: 3223552. Throughput: 0: 995.9. Samples: 805550. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:22:49,107][00224] Avg episode reward: [(0, '21.868')] |
|
[2024-07-24 10:22:53,278][01069] Updated weights for policy 0, policy_version 790 (0.0015) |
|
[2024-07-24 10:22:54,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3790.5). Total num frames: 3235840. Throughput: 0: 944.8. Samples: 809930. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:22:54,103][00224] Avg episode reward: [(0, '22.883')] |
|
[2024-07-24 10:22:54,117][01056] Saving new best policy, reward=22.883! |
|
[2024-07-24 10:22:59,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3891.2, 300 sec: 3818.3). Total num frames: 3260416. Throughput: 0: 941.4. Samples: 813258. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:22:59,102][00224] Avg episode reward: [(0, '22.099')] |
|
[2024-07-24 10:23:02,579][01069] Updated weights for policy 0, policy_version 800 (0.0041) |
|
[2024-07-24 10:23:04,102][00224] Fps is (10 sec: 4505.1, 60 sec: 3959.4, 300 sec: 3832.2). Total num frames: 3280896. Throughput: 0: 986.8. Samples: 820176. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2024-07-24 10:23:04,107][00224] Avg episode reward: [(0, '20.855')] |
|
[2024-07-24 10:23:09,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3297280. Throughput: 0: 956.9. Samples: 824718. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:23:09,106][00224] Avg episode reward: [(0, '19.421')] |
|
[2024-07-24 10:23:13,708][01069] Updated weights for policy 0, policy_version 810 (0.0039) |
|
[2024-07-24 10:23:14,101][00224] Fps is (10 sec: 3686.8, 60 sec: 3822.9, 300 sec: 3818.3). Total num frames: 3317760. Throughput: 0: 946.6. Samples: 827644. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:23:14,107][00224] Avg episode reward: [(0, '19.640')] |
|
[2024-07-24 10:23:19,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3846.1). Total num frames: 3338240. Throughput: 0: 969.7. Samples: 834430. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:23:19,105][00224] Avg episode reward: [(0, '19.696')] |
|
[2024-07-24 10:23:24,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3354624. Throughput: 0: 970.6. Samples: 839654. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:23:24,103][00224] Avg episode reward: [(0, '20.204')] |
|
[2024-07-24 10:23:24,562][01069] Updated weights for policy 0, policy_version 820 (0.0027) |
|
[2024-07-24 10:23:29,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3823.2, 300 sec: 3818.3). Total num frames: 3375104. Throughput: 0: 942.8. Samples: 841822. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-07-24 10:23:29,103][00224] Avg episode reward: [(0, '20.962')] |
|
[2024-07-24 10:23:34,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3846.1). Total num frames: 3395584. Throughput: 0: 958.8. Samples: 848698. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:23:34,103][00224] Avg episode reward: [(0, '22.106')] |
|
[2024-07-24 10:23:34,393][01069] Updated weights for policy 0, policy_version 830 (0.0031) |
|
[2024-07-24 10:23:39,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3416064. Throughput: 0: 995.6. Samples: 854732. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
|
[2024-07-24 10:23:39,106][00224] Avg episode reward: [(0, '22.172')] |
|
[2024-07-24 10:23:44,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3432448. Throughput: 0: 969.1. Samples: 856866. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:23:44,109][00224] Avg episode reward: [(0, '20.999')] |
|
[2024-07-24 10:23:44,122][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000838_3432448.pth... |
|
[2024-07-24 10:23:44,246][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000613_2510848.pth |
|
[2024-07-24 10:23:45,760][01069] Updated weights for policy 0, policy_version 840 (0.0025) |
|
[2024-07-24 10:23:49,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3452928. Throughput: 0: 949.4. Samples: 862898. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:23:49,109][00224] Avg episode reward: [(0, '20.061')] |
|
[2024-07-24 10:23:54,101][00224] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3873.8). Total num frames: 3477504. Throughput: 0: 996.9. Samples: 869580. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:23:54,103][00224] Avg episode reward: [(0, '20.136')] |
|
[2024-07-24 10:23:55,474][01069] Updated weights for policy 0, policy_version 850 (0.0038) |
|
[2024-07-24 10:23:59,102][00224] Fps is (10 sec: 3685.9, 60 sec: 3822.8, 300 sec: 3846.1). Total num frames: 3489792. Throughput: 0: 979.3. Samples: 871712. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:23:59,109][00224] Avg episode reward: [(0, '20.006')] |
|
[2024-07-24 10:24:04,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3823.0, 300 sec: 3832.2). Total num frames: 3510272. Throughput: 0: 950.4. Samples: 877196. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:24:04,105][00224] Avg episode reward: [(0, '20.783')] |
|
[2024-07-24 10:24:06,167][01069] Updated weights for policy 0, policy_version 860 (0.0022) |
|
[2024-07-24 10:24:09,101][00224] Fps is (10 sec: 4506.2, 60 sec: 3959.5, 300 sec: 3873.8). Total num frames: 3534848. Throughput: 0: 988.3. Samples: 884126. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:24:09,110][00224] Avg episode reward: [(0, '21.554')] |
|
[2024-07-24 10:24:14,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3551232. Throughput: 0: 1000.8. Samples: 886858. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:24:14,107][00224] Avg episode reward: [(0, '20.616')] |
|
[2024-07-24 10:24:17,875][01069] Updated weights for policy 0, policy_version 870 (0.0024) |
|
[2024-07-24 10:24:19,101][00224] Fps is (10 sec: 3276.7, 60 sec: 3822.9, 300 sec: 3832.2). Total num frames: 3567616. Throughput: 0: 949.4. Samples: 891420. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
|
[2024-07-24 10:24:19,103][00224] Avg episode reward: [(0, '20.346')] |
|
[2024-07-24 10:24:24,101][00224] Fps is (10 sec: 4095.9, 60 sec: 3959.5, 300 sec: 3860.0). Total num frames: 3592192. Throughput: 0: 967.1. Samples: 898252. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:24:24,107][00224] Avg episode reward: [(0, '21.156')] |
|
[2024-07-24 10:24:26,634][01069] Updated weights for policy 0, policy_version 880 (0.0025) |
|
[2024-07-24 10:24:29,102][00224] Fps is (10 sec: 4095.6, 60 sec: 3891.1, 300 sec: 3859.9). Total num frames: 3608576. Throughput: 0: 994.7. Samples: 901630. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
|
[2024-07-24 10:24:29,107][00224] Avg episode reward: [(0, '20.752')] |
|
[2024-07-24 10:24:34,101][00224] Fps is (10 sec: 3276.9, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3624960. Throughput: 0: 958.6. Samples: 906034. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:24:34,108][00224] Avg episode reward: [(0, '20.779')] |
|
[2024-07-24 10:24:37,931][01069] Updated weights for policy 0, policy_version 890 (0.0020) |
|
[2024-07-24 10:24:39,101][00224] Fps is (10 sec: 4096.6, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3649536. Throughput: 0: 957.7. Samples: 912676. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:24:39,107][00224] Avg episode reward: [(0, '21.302')] |
|
[2024-07-24 10:24:44,101][00224] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3887.7). Total num frames: 3670016. Throughput: 0: 985.6. Samples: 916064. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-24 10:24:44,107][00224] Avg episode reward: [(0, '21.814')] |
|
[2024-07-24 10:24:49,048][01069] Updated weights for policy 0, policy_version 900 (0.0046) |
|
[2024-07-24 10:24:49,104][00224] Fps is (10 sec: 3685.1, 60 sec: 3891.0, 300 sec: 3859.9). Total num frames: 3686400. Throughput: 0: 976.1. Samples: 921122. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-24 10:24:49,111][00224] Avg episode reward: [(0, '21.479')] |
|
[2024-07-24 10:24:54,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3846.1). Total num frames: 3706880. Throughput: 0: 949.3. Samples: 926846. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:24:54,103][00224] Avg episode reward: [(0, '22.018')] |
|
[2024-07-24 10:24:58,436][01069] Updated weights for policy 0, policy_version 910 (0.0052) |
|
[2024-07-24 10:24:59,101][00224] Fps is (10 sec: 4097.4, 60 sec: 3959.6, 300 sec: 3873.8). Total num frames: 3727360. Throughput: 0: 964.7. Samples: 930268. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:24:59,102][00224] Avg episode reward: [(0, '22.490')] |
|
[2024-07-24 10:25:04,101][00224] Fps is (10 sec: 3686.3, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3743744. Throughput: 0: 993.2. Samples: 936114. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:25:04,109][00224] Avg episode reward: [(0, '22.849')] |
|
[2024-07-24 10:25:09,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3760128. Throughput: 0: 953.2. Samples: 941146. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-07-24 10:25:09,110][00224] Avg episode reward: [(0, '22.044')] |
|
[2024-07-24 10:25:10,147][01069] Updated weights for policy 0, policy_version 920 (0.0022) |
|
[2024-07-24 10:25:14,101][00224] Fps is (10 sec: 4096.1, 60 sec: 3891.2, 300 sec: 3873.9). Total num frames: 3784704. Throughput: 0: 954.0. Samples: 944558. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:25:14,106][00224] Avg episode reward: [(0, '20.856')] |
|
[2024-07-24 10:25:19,102][00224] Fps is (10 sec: 4504.9, 60 sec: 3959.4, 300 sec: 3887.7). Total num frames: 3805184. Throughput: 0: 1002.3. Samples: 951140. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:25:19,107][00224] Avg episode reward: [(0, '20.699')] |
|
[2024-07-24 10:25:20,078][01069] Updated weights for policy 0, policy_version 930 (0.0039) |
|
[2024-07-24 10:25:24,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3817472. Throughput: 0: 947.6. Samples: 955320. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
|
[2024-07-24 10:25:24,109][00224] Avg episode reward: [(0, '20.494')] |
|
[2024-07-24 10:25:29,101][00224] Fps is (10 sec: 3686.9, 60 sec: 3891.3, 300 sec: 3860.0). Total num frames: 3842048. Throughput: 0: 943.5. Samples: 958522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:25:29,103][00224] Avg episode reward: [(0, '21.741')] |
|
[2024-07-24 10:25:30,642][01069] Updated weights for policy 0, policy_version 940 (0.0044) |
|
[2024-07-24 10:25:34,104][00224] Fps is (10 sec: 4504.0, 60 sec: 3959.2, 300 sec: 3873.8). Total num frames: 3862528. Throughput: 0: 984.8. Samples: 965436. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0) |
|
[2024-07-24 10:25:34,110][00224] Avg episode reward: [(0, '22.021')] |
|
[2024-07-24 10:25:39,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3878912. Throughput: 0: 964.7. Samples: 970256. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:25:39,105][00224] Avg episode reward: [(0, '21.124')] |
|
[2024-07-24 10:25:42,289][01069] Updated weights for policy 0, policy_version 950 (0.0032) |
|
[2024-07-24 10:25:44,101][00224] Fps is (10 sec: 3687.7, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3899392. Throughput: 0: 945.1. Samples: 972798. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:25:44,103][00224] Avg episode reward: [(0, '20.586')] |
|
[2024-07-24 10:25:44,116][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000952_3899392.pth... |
|
[2024-07-24 10:25:44,236][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000725_2969600.pth |
|
[2024-07-24 10:25:49,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.4, 300 sec: 3873.8). Total num frames: 3919872. Throughput: 0: 966.6. Samples: 979610. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:25:49,104][00224] Avg episode reward: [(0, '20.856')] |
|
[2024-07-24 10:25:51,641][01069] Updated weights for policy 0, policy_version 960 (0.0049) |
|
[2024-07-24 10:25:54,101][00224] Fps is (10 sec: 3686.4, 60 sec: 3822.9, 300 sec: 3860.0). Total num frames: 3936256. Throughput: 0: 976.7. Samples: 985098. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:25:54,103][00224] Avg episode reward: [(0, '20.900')] |
|
[2024-07-24 10:25:59,101][00224] Fps is (10 sec: 3276.8, 60 sec: 3754.7, 300 sec: 3846.1). Total num frames: 3952640. Throughput: 0: 949.3. Samples: 987276. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
|
[2024-07-24 10:25:59,107][00224] Avg episode reward: [(0, '21.889')] |
|
[2024-07-24 10:26:02,762][01069] Updated weights for policy 0, policy_version 970 (0.0026) |
|
[2024-07-24 10:26:04,101][00224] Fps is (10 sec: 4096.0, 60 sec: 3891.2, 300 sec: 3860.0). Total num frames: 3977216. Throughput: 0: 947.3. Samples: 993766. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
|
[2024-07-24 10:26:04,108][00224] Avg episode reward: [(0, '21.784')] |
|
[2024-07-24 10:26:09,103][00224] Fps is (10 sec: 4504.5, 60 sec: 3959.3, 300 sec: 3887.7). Total num frames: 3997696. Throughput: 0: 997.9. Samples: 1000228. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
|
[2024-07-24 10:26:09,106][00224] Avg episode reward: [(0, '23.367')] |
|
[2024-07-24 10:26:09,111][01056] Saving new best policy, reward=23.367! |
|
[2024-07-24 10:26:11,331][01056] Stopping Batcher_0... |
|
[2024-07-24 10:26:11,333][01056] Loop batcher_evt_loop terminating... |
|
[2024-07-24 10:26:11,334][00224] Component Batcher_0 stopped! |
|
[2024-07-24 10:26:11,349][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-24 10:26:11,440][00224] Component RolloutWorker_w1 stopped! |
|
[2024-07-24 10:26:11,445][00224] Component RolloutWorker_w3 stopped! |
|
[2024-07-24 10:26:11,450][01073] Stopping RolloutWorker_w3... |
|
[2024-07-24 10:26:11,451][01073] Loop rollout_proc3_evt_loop terminating... |
|
[2024-07-24 10:26:11,445][01071] Stopping RolloutWorker_w1... |
|
[2024-07-24 10:26:11,478][01070] Stopping RolloutWorker_w0... |
|
[2024-07-24 10:26:11,478][01070] Loop rollout_proc0_evt_loop terminating... |
|
[2024-07-24 10:26:11,476][00224] Component RolloutWorker_w7 stopped! |
|
[2024-07-24 10:26:11,480][00224] Component RolloutWorker_w0 stopped! |
|
[2024-07-24 10:26:11,482][01077] Stopping RolloutWorker_w7... |
|
[2024-07-24 10:26:11,482][01077] Loop rollout_proc7_evt_loop terminating... |
|
[2024-07-24 10:26:11,469][01071] Loop rollout_proc1_evt_loop terminating... |
|
[2024-07-24 10:26:11,493][01069] Weights refcount: 2 0 |
|
[2024-07-24 10:26:11,504][00224] Component InferenceWorker_p0-w0 stopped! |
|
[2024-07-24 10:26:11,506][01069] Stopping InferenceWorker_p0-w0... |
|
[2024-07-24 10:26:11,512][01069] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-07-24 10:26:11,520][01072] Stopping RolloutWorker_w2... |
|
[2024-07-24 10:26:11,520][00224] Component RolloutWorker_w2 stopped! |
|
[2024-07-24 10:26:11,523][01072] Loop rollout_proc2_evt_loop terminating... |
|
[2024-07-24 10:26:11,573][00224] Component RolloutWorker_w5 stopped! |
|
[2024-07-24 10:26:11,576][01075] Stopping RolloutWorker_w5... |
|
[2024-07-24 10:26:11,576][01075] Loop rollout_proc5_evt_loop terminating... |
|
[2024-07-24 10:26:11,595][01056] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000838_3432448.pth |
|
[2024-07-24 10:26:11,601][01074] Stopping RolloutWorker_w4... |
|
[2024-07-24 10:26:11,602][01074] Loop rollout_proc4_evt_loop terminating... |
|
[2024-07-24 10:26:11,601][00224] Component RolloutWorker_w4 stopped! |
|
[2024-07-24 10:26:11,619][01056] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-24 10:26:11,646][01076] Stopping RolloutWorker_w6... |
|
[2024-07-24 10:26:11,647][01076] Loop rollout_proc6_evt_loop terminating... |
|
[2024-07-24 10:26:11,644][00224] Component RolloutWorker_w6 stopped! |
|
[2024-07-24 10:26:11,847][00224] Component LearnerWorker_p0 stopped! |
|
[2024-07-24 10:26:11,854][00224] Waiting for process learner_proc0 to stop... |
|
[2024-07-24 10:26:11,856][01056] Stopping LearnerWorker_p0... |
|
[2024-07-24 10:26:11,856][01056] Loop learner_proc0_evt_loop terminating... |
|
[2024-07-24 10:26:13,509][00224] Waiting for process inference_proc0-0 to join... |
|
[2024-07-24 10:26:13,668][00224] Waiting for process rollout_proc0 to join... |
|
[2024-07-24 10:26:15,501][00224] Waiting for process rollout_proc1 to join... |
|
[2024-07-24 10:26:15,533][00224] Waiting for process rollout_proc2 to join... |
|
[2024-07-24 10:26:15,539][00224] Waiting for process rollout_proc3 to join... |
|
[2024-07-24 10:26:15,544][00224] Waiting for process rollout_proc4 to join... |
|
[2024-07-24 10:26:15,548][00224] Waiting for process rollout_proc5 to join... |
|
[2024-07-24 10:26:15,552][00224] Waiting for process rollout_proc6 to join... |
|
[2024-07-24 10:26:15,558][00224] Waiting for process rollout_proc7 to join... |
|
[2024-07-24 10:26:15,563][00224] Batcher 0 profile tree view: |
|
batching: 28.4988, releasing_batches: 0.0265 |
|
[2024-07-24 10:26:15,564][00224] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0000 |
|
  wait_policy_total: 416.1414 |
|
update_model: 9.1309 |
|
weight_update: 0.0030 |
|
one_step: 0.0186 |
|
  handle_policy_step: 609.2127 |
|
    deserialize: 15.8377, stack: 3.2613, obs_to_device_normalize: 125.3933, forward: 323.6455, send_messages: 29.4422 |
|
    prepare_outputs: 81.3711 |
|
      to_cpu: 47.3671 |
|
[2024-07-24 10:26:15,566][00224] Learner 0 profile tree view: |
|
misc: 0.0079, prepare_batch: 14.2028 |
|
train: 75.4877 |
|
  epoch_init: 0.0120, minibatch_init: 0.0109, losses_postprocess: 0.6942, kl_divergence: 0.7094, after_optimizer: 33.7991 |
|
  calculate_losses: 28.4772 |
|
    losses_init: 0.0038, forward_head: 1.3111, bptt_initial: 19.4025, tail: 1.1882, advantages_returns: 0.3075, losses: 3.7774 |
|
    bptt: 2.1621 |
|
      bptt_forward_core: 2.0399 |
|
  update: 11.2370 |
|
    clip: 0.9566 |
|
[2024-07-24 10:26:15,568][00224] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.2861, enqueue_policy_requests: 107.0559, env_step: 844.8170, overhead: 14.3586, complete_rollouts: 6.8662 |
|
save_policy_outputs: 19.8947 |
|
  split_output_tensors: 7.8939 |
|
[2024-07-24 10:26:15,570][00224] RolloutWorker_w7 profile tree view: |
|
wait_for_trajectories: 0.3050, enqueue_policy_requests: 108.7155, env_step: 840.6951, overhead: 14.1774, complete_rollouts: 7.3154 |
|
save_policy_outputs: 20.2914 |
|
  split_output_tensors: 8.2277 |
|
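The profile tree views above come from named wall-clock timers wrapped around each stage (batching, the inference forward pass, env stepping, and so on) and accumulated per component over the whole run; indented entries are timed inside their parent. A minimal sketch of the pattern, not Sample Factory's actual `Timing` implementation:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def timeit(name):
    """Accumulate wall-clock seconds spent under `name`."""
    start = time.monotonic()
    try:
        yield
    finally:
        timings[name] += time.monotonic() - start

# e.g. with timeit("env_step"): obs, reward, terminated, truncated, info = env.step(action)
```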
[2024-07-24 10:26:15,572][00224] Loop Runner_EvtLoop terminating... |
|
[2024-07-24 10:26:15,574][00224] Runner profile tree view: |
|
main_loop: 1104.8262 |
|
[2024-07-24 10:26:15,575][00224] Collected {0: 4005888}, FPS: 3625.8 |
|
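The final throughput line is simply the total number of collected frames divided by the runner's main-loop time, consistent with the profile just above:

```python
total_frames = 4005888   # from "Collected {0: 4005888}"
main_loop_s = 1104.8262  # from "main_loop: 1104.8262" in the runner profile
print(round(total_frames / main_loop_s, 1))  # 3625.8, the reported FPS
```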
[2024-07-24 10:26:15,967][00224] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-24 10:26:15,969][00224] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-24 10:26:15,970][00224] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-24 10:26:15,972][00224] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-24 10:26:15,974][00224] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-24 10:26:15,976][00224] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-24 10:26:15,977][00224] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-24 10:26:15,978][00224] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-24 10:26:15,979][00224] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-07-24 10:26:15,980][00224] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-07-24 10:26:15,981][00224] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-24 10:26:15,982][00224] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-24 10:26:15,983][00224] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-24 10:26:15,984][00224] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
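For evaluation, the saved training config is reloaded and evaluation-only arguments are layered on top: keys already present are overridden, and missing keys are added with the warning seen above. A sketch of that merge over a plain JSON config (`overrides` is an illustrative dict of parsed CLI args):

```python
import json

def load_eval_config(path, overrides):
    """Merge evaluation-time CLI arguments into the saved training config."""
    with open(path) as f:
        cfg = json.load(f)
    for key, value in overrides.items():
        if key in cfg:
            print(f"Overriding arg {key!r} with value {value!r} passed from command line")
        else:
            print(f"Adding new argument {key!r}={value!r} that is not in the saved config file!")
        cfg[key] = value
    return cfg
```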
[2024-07-24 10:26:15,986][00224] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-07-24 10:26:16,022][00224] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-07-24 10:26:16,026][00224] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-24 10:26:16,028][00224] RunningMeanStd input shape: (1,) |
|
[2024-07-24 10:26:16,046][00224] ConvEncoder: input_channels=3 |
|
[2024-07-24 10:26:16,160][00224] Conv encoder output size: 512 |
|
[2024-07-24 10:26:16,161][00224] Policy head output size: 512 |
|
[2024-07-24 10:26:16,334][00224] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
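Restoring the policy is a standard PyTorch state-dict load from the latest checkpoint. A minimal sketch, assuming the weights live under a `"model"` key in the checkpoint dict (an assumption about the file layout):

```python
import torch

def load_policy(model, ckpt_path):
    """Restore weights for evaluation ("model" as the state-dict key is assumed)."""
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(checkpoint["model"])
    model.eval()  # inference mode: no dropout, frozen normalizer statistics
    return model
```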
[2024-07-24 10:26:17,099][00224] Num frames 100... |
|
[2024-07-24 10:26:17,228][00224] Num frames 200... |
|
[2024-07-24 10:26:17,355][00224] Num frames 300... |
|
[2024-07-24 10:26:17,482][00224] Num frames 400... |
|
[2024-07-24 10:26:17,611][00224] Num frames 500... |
|
[2024-07-24 10:26:17,738][00224] Num frames 600... |
|
[2024-07-24 10:26:17,870][00224] Num frames 700... |
|
[2024-07-24 10:26:17,998][00224] Num frames 800... |
|
[2024-07-24 10:26:18,138][00224] Num frames 900... |
|
[2024-07-24 10:26:18,311][00224] Avg episode rewards: #0: 19.920, true rewards: #0: 9.920 |
|
[2024-07-24 10:26:18,313][00224] Avg episode reward: 19.920, avg true_objective: 9.920 |
|
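During evaluation, `Avg episode rewards` is the running mean over the episodes finished so far, while `true rewards` tracks the unshaped objective separately from the shaped return the agent was trained on. A sketch of the bookkeeping (illustrative names):

```python
episode_rewards, true_rewards = [], []

def on_episode_end(shaped_return, true_objective):
    """Update and report the running means, as in the log lines above."""
    episode_rewards.append(shaped_return)
    true_rewards.append(true_objective)
    n = len(episode_rewards)
    print(f"Avg episode rewards: #0: {sum(episode_rewards) / n:.3f}, "
          f"true rewards: #0: {sum(true_rewards) / n:.3f}")
```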
[2024-07-24 10:26:18,327][00224] Num frames 1000... |
|
[2024-07-24 10:26:18,452][00224] Num frames 1100... |
|
[2024-07-24 10:26:18,578][00224] Num frames 1200... |
|
[2024-07-24 10:26:18,740][00224] Num frames 1300... |
|
[2024-07-24 10:26:18,875][00224] Num frames 1400... |
|
[2024-07-24 10:26:19,003][00224] Num frames 1500... |
|
[2024-07-24 10:26:19,138][00224] Num frames 1600... |
|
[2024-07-24 10:26:19,270][00224] Num frames 1700... |
|
[2024-07-24 10:26:19,397][00224] Num frames 1800... |
|
[2024-07-24 10:26:19,526][00224] Num frames 1900... |
|
[2024-07-24 10:26:19,653][00224] Num frames 2000... |
|
[2024-07-24 10:26:19,784][00224] Num frames 2100... |
|
[2024-07-24 10:26:19,923][00224] Num frames 2200... |
|
[2024-07-24 10:26:20,055][00224] Num frames 2300... |
|
[2024-07-24 10:26:20,193][00224] Num frames 2400... |
|
[2024-07-24 10:26:20,291][00224] Avg episode rewards: #0: 24.160, true rewards: #0: 12.160 |
|
[2024-07-24 10:26:20,293][00224] Avg episode reward: 24.160, avg true_objective: 12.160 |
|
[2024-07-24 10:26:20,380][00224] Num frames 2500... |
|
[2024-07-24 10:26:20,506][00224] Num frames 2600... |
|
[2024-07-24 10:26:20,633][00224] Num frames 2700... |
|
[2024-07-24 10:26:20,760][00224] Num frames 2800... |
|
[2024-07-24 10:26:20,893][00224] Num frames 2900... |
|
[2024-07-24 10:26:21,024][00224] Num frames 3000... |
|
[2024-07-24 10:26:21,151][00224] Num frames 3100... |
|
[2024-07-24 10:26:21,291][00224] Num frames 3200... |
|
[2024-07-24 10:26:21,419][00224] Num frames 3300... |
|
[2024-07-24 10:26:21,545][00224] Num frames 3400... |
|
[2024-07-24 10:26:21,682][00224] Num frames 3500... |
|
[2024-07-24 10:26:21,814][00224] Num frames 3600... |
|
[2024-07-24 10:26:21,947][00224] Num frames 3700... |
|
[2024-07-24 10:26:22,079][00224] Num frames 3800... |
|
[2024-07-24 10:26:22,218][00224] Num frames 3900... |
|
[2024-07-24 10:26:22,348][00224] Num frames 4000... |
|
[2024-07-24 10:26:22,475][00224] Num frames 4100... |
|
[2024-07-24 10:26:22,609][00224] Num frames 4200... |
|
[2024-07-24 10:26:22,738][00224] Num frames 4300... |
|
[2024-07-24 10:26:22,871][00224] Num frames 4400... |
|
[2024-07-24 10:26:23,000][00224] Num frames 4500... |
|
[2024-07-24 10:26:23,097][00224] Avg episode rewards: #0: 35.773, true rewards: #0: 15.107 |
|
[2024-07-24 10:26:23,098][00224] Avg episode reward: 35.773, avg true_objective: 15.107 |
|
[2024-07-24 10:26:23,188][00224] Num frames 4600... |
|
[2024-07-24 10:26:23,383][00224] Num frames 4700... |
|
[2024-07-24 10:26:23,565][00224] Num frames 4800... |
|
[2024-07-24 10:26:23,747][00224] Num frames 4900... |
|
[2024-07-24 10:26:23,936][00224] Num frames 5000... |
|
[2024-07-24 10:26:24,118][00224] Num frames 5100... |
|
[2024-07-24 10:26:24,309][00224] Num frames 5200... |
|
[2024-07-24 10:26:24,494][00224] Num frames 5300... |
|
[2024-07-24 10:26:24,679][00224] Num frames 5400... |
|
[2024-07-24 10:26:24,873][00224] Num frames 5500... |
|
[2024-07-24 10:26:25,036][00224] Avg episode rewards: #0: 33.390, true rewards: #0: 13.890 |
|
[2024-07-24 10:26:25,038][00224] Avg episode reward: 33.390, avg true_objective: 13.890 |
|
[2024-07-24 10:26:25,123][00224] Num frames 5600... |
|
[2024-07-24 10:26:25,309][00224] Num frames 5700... |
|
[2024-07-24 10:26:25,436][00224] Num frames 5800... |
|
[2024-07-24 10:26:25,567][00224] Num frames 5900... |
|
[2024-07-24 10:26:25,694][00224] Num frames 6000... |
|
[2024-07-24 10:26:25,823][00224] Num frames 6100... |
|
[2024-07-24 10:26:25,957][00224] Num frames 6200... |
|
[2024-07-24 10:26:26,087][00224] Num frames 6300... |
|
[2024-07-24 10:26:26,215][00224] Num frames 6400... |
|
[2024-07-24 10:26:26,349][00224] Num frames 6500... |
|
[2024-07-24 10:26:26,484][00224] Num frames 6600... |
|
[2024-07-24 10:26:26,626][00224] Num frames 6700... |
|
[2024-07-24 10:26:26,773][00224] Avg episode rewards: #0: 32.544, true rewards: #0: 13.544 |
|
[2024-07-24 10:26:26,776][00224] Avg episode reward: 32.544, avg true_objective: 13.544 |
|
[2024-07-24 10:26:26,814][00224] Num frames 6800... |
|
[2024-07-24 10:26:26,947][00224] Num frames 6900... |
|
[2024-07-24 10:26:27,076][00224] Num frames 7000... |
|
[2024-07-24 10:26:27,202][00224] Num frames 7100... |
|
[2024-07-24 10:26:27,330][00224] Num frames 7200... |
|
[2024-07-24 10:26:27,465][00224] Num frames 7300... |
|
[2024-07-24 10:26:27,591][00224] Num frames 7400... |
|
[2024-07-24 10:26:27,721][00224] Num frames 7500... |
|
[2024-07-24 10:26:27,852][00224] Num frames 7600... |
|
[2024-07-24 10:26:27,980][00224] Num frames 7700... |
|
[2024-07-24 10:26:28,112][00224] Num frames 7800... |
|
[2024-07-24 10:26:28,241][00224] Num frames 7900... |
|
[2024-07-24 10:26:28,375][00224] Num frames 8000... |
|
[2024-07-24 10:26:28,506][00224] Num frames 8100... |
|
[2024-07-24 10:26:28,639][00224] Num frames 8200... |
|
[2024-07-24 10:26:28,768][00224] Num frames 8300... |
|
[2024-07-24 10:26:28,844][00224] Avg episode rewards: #0: 34.023, true rewards: #0: 13.857 |
|
[2024-07-24 10:26:28,845][00224] Avg episode reward: 34.023, avg true_objective: 13.857 |
|
[2024-07-24 10:26:28,959][00224] Num frames 8400... |
|
[2024-07-24 10:26:29,090][00224] Num frames 8500... |
|
[2024-07-24 10:26:29,215][00224] Num frames 8600... |
|
[2024-07-24 10:26:29,344][00224] Num frames 8700... |
|
[2024-07-24 10:26:29,479][00224] Num frames 8800... |
|
[2024-07-24 10:26:29,605][00224] Num frames 8900... |
|
[2024-07-24 10:26:29,732][00224] Num frames 9000... |
|
[2024-07-24 10:26:29,866][00224] Num frames 9100... |
|
[2024-07-24 10:26:29,993][00224] Num frames 9200... |
|
[2024-07-24 10:26:30,150][00224] Avg episode rewards: #0: 32.248, true rewards: #0: 13.249 |
|
[2024-07-24 10:26:30,152][00224] Avg episode reward: 32.248, avg true_objective: 13.249 |
|
[2024-07-24 10:26:30,187][00224] Num frames 9300... |
|
[2024-07-24 10:26:30,311][00224] Num frames 9400... |
|
[2024-07-24 10:26:30,446][00224] Num frames 9500... |
|
[2024-07-24 10:26:30,571][00224] Num frames 9600... |
|
[2024-07-24 10:26:30,699][00224] Num frames 9700... |
|
[2024-07-24 10:26:30,825][00224] Num frames 9800... |
|
[2024-07-24 10:26:30,958][00224] Num frames 9900... |
|
[2024-07-24 10:26:31,086][00224] Num frames 10000... |
|
[2024-07-24 10:26:31,210][00224] Num frames 10100... |
|
[2024-07-24 10:26:31,275][00224] Avg episode rewards: #0: 30.132, true rewards: #0: 12.632 |
|
[2024-07-24 10:26:31,276][00224] Avg episode reward: 30.132, avg true_objective: 12.632 |
|
[2024-07-24 10:26:31,395][00224] Num frames 10200... |
|
[2024-07-24 10:26:31,530][00224] Num frames 10300... |
|
[2024-07-24 10:26:31,660][00224] Num frames 10400... |
|
[2024-07-24 10:26:31,784][00224] Num frames 10500... |
|
[2024-07-24 10:26:31,916][00224] Num frames 10600... |
|
[2024-07-24 10:26:32,045][00224] Num frames 10700... |
|
[2024-07-24 10:26:32,172][00224] Num frames 10800... |
|
[2024-07-24 10:26:32,282][00224] Avg episode rewards: #0: 28.602, true rewards: #0: 12.047 |
|
[2024-07-24 10:26:32,283][00224] Avg episode reward: 28.602, avg true_objective: 12.047 |
|
[2024-07-24 10:26:32,358][00224] Num frames 10900... |
|
[2024-07-24 10:26:32,491][00224] Num frames 11000... |
|
[2024-07-24 10:26:32,615][00224] Num frames 11100... |
|
[2024-07-24 10:26:32,741][00224] Num frames 11200... |
|
[2024-07-24 10:26:32,876][00224] Num frames 11300... |
|
[2024-07-24 10:26:33,003][00224] Num frames 11400... |
|
[2024-07-24 10:26:33,134][00224] Num frames 11500... |
|
[2024-07-24 10:26:33,259][00224] Num frames 11600... |
|
[2024-07-24 10:26:33,387][00224] Num frames 11700... |
|
[2024-07-24 10:26:33,521][00224] Num frames 11800... |
|
[2024-07-24 10:26:33,653][00224] Num frames 11900... |
|
[2024-07-24 10:26:33,779][00224] Num frames 12000... |
|
[2024-07-24 10:26:33,916][00224] Avg episode rewards: #0: 28.258, true rewards: #0: 12.058 |
|
[2024-07-24 10:26:33,917][00224] Avg episode reward: 28.258, avg true_objective: 12.058 |
|
[2024-07-24 10:27:43,359][00224] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
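`Replay video saved to …/replay.mp4` corresponds to flushing the rendered RGB frames through an ffmpeg-backed writer once all evaluation episodes finish. A sketch using `imageio` as one plausible backend (not necessarily what Sample Factory uses; the frame rate is an assumption):

```python
import imageio

def save_replay(frames, path="replay.mp4", fps=35):
    """Write a sequence of HxWx3 uint8 frames to an mp4 (fps value is a guess)."""
    imageio.mimwrite(path, frames, fps=fps)
    print(f"Replay video saved to {path}!")
```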
[2024-07-24 10:27:44,050][00224] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-24 10:27:44,052][00224] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-24 10:27:44,053][00224] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-24 10:27:44,055][00224] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-24 10:27:44,057][00224] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-24 10:27:44,058][00224] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-24 10:27:44,060][00224] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2024-07-24 10:27:44,061][00224] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-24 10:27:44,063][00224] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2024-07-24 10:27:44,064][00224] Adding new argument 'hf_repository'='nithin04/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2024-07-24 10:27:44,065][00224] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-24 10:27:44,066][00224] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-24 10:27:44,066][00224] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-24 10:27:44,067][00224] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-07-24 10:27:44,068][00224] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
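`frameskip 1 and render_action_repeat=4` means the env renders every frame, but each policy action is repeated four times with the rewards summed, so the saved video is smooth while the effective control rate matches training. A minimal gym-style wrapper sketch (illustrative; ViZDoom can also skip frames natively):

```python
import gymnasium as gym

class ActionRepeat(gym.Wrapper):
    """Repeat each policy action `n` times, summing the rewards."""

    def __init__(self, env, n=4):
        super().__init__(env)
        self.n = n

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.n):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info
```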
[2024-07-24 10:27:44,109][00224] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-24 10:27:44,111][00224] RunningMeanStd input shape: (1,) |
|
[2024-07-24 10:27:44,127][00224] ConvEncoder: input_channels=3 |
|
[2024-07-24 10:27:44,183][00224] Conv encoder output size: 512 |
|
[2024-07-24 10:27:44,185][00224] Policy head output size: 512 |
|
[2024-07-24 10:27:44,211][00224] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-07-24 10:27:44,913][00224] Num frames 100... |
|
[2024-07-24 10:27:45,090][00224] Num frames 200... |
|
[2024-07-24 10:27:45,260][00224] Num frames 300... |
|
[2024-07-24 10:27:45,429][00224] Num frames 400... |
|
[2024-07-24 10:27:45,601][00224] Num frames 500... |
|
[2024-07-24 10:27:45,771][00224] Num frames 600... |
|
[2024-07-24 10:27:45,958][00224] Num frames 700... |
|
[2024-07-24 10:27:46,130][00224] Num frames 800... |
|
[2024-07-24 10:27:46,302][00224] Num frames 900... |
|
[2024-07-24 10:27:46,471][00224] Num frames 1000... |
|
[2024-07-24 10:27:46,647][00224] Num frames 1100... |
|
[2024-07-24 10:27:46,847][00224] Avg episode rewards: #0: 25.840, true rewards: #0: 11.840 |
|
[2024-07-24 10:27:46,849][00224] Avg episode reward: 25.840, avg true_objective: 11.840 |
|
[2024-07-24 10:27:46,879][00224] Num frames 1200... |
|
[2024-07-24 10:27:47,055][00224] Num frames 1300... |
|
[2024-07-24 10:27:47,244][00224] Num frames 1400... |
|
[2024-07-24 10:27:47,428][00224] Num frames 1500... |
|
[2024-07-24 10:27:47,609][00224] Num frames 1600... |
|
[2024-07-24 10:27:47,797][00224] Num frames 1700... |
|
[2024-07-24 10:27:47,996][00224] Num frames 1800... |
|
[2024-07-24 10:27:48,187][00224] Num frames 1900... |
|
[2024-07-24 10:27:48,378][00224] Num frames 2000... |
|
[2024-07-24 10:27:48,570][00224] Num frames 2100... |
|
[2024-07-24 10:27:48,775][00224] Num frames 2200... |
|
[2024-07-24 10:27:48,909][00224] Avg episode rewards: #0: 23.700, true rewards: #0: 11.200 |
|
[2024-07-24 10:27:48,912][00224] Avg episode reward: 23.700, avg true_objective: 11.200 |
|
[2024-07-24 10:27:49,066][00224] Num frames 2300... |
|
[2024-07-24 10:27:49,282][00224] Num frames 2400... |
|
[2024-07-24 10:27:49,517][00224] Num frames 2500... |
|
[2024-07-24 10:27:49,718][00224] Num frames 2600... |
|
[2024-07-24 10:27:49,915][00224] Num frames 2700... |
|
[2024-07-24 10:27:50,122][00224] Num frames 2800... |
|
[2024-07-24 10:27:50,339][00224] Num frames 2900... |
|
[2024-07-24 10:27:50,539][00224] Num frames 3000... |
|
[2024-07-24 10:27:50,724][00224] Num frames 3100... |
|
[2024-07-24 10:27:50,909][00224] Num frames 3200... |
|
[2024-07-24 10:27:51,117][00224] Num frames 3300... |
|
[2024-07-24 10:27:51,324][00224] Num frames 3400... |
|
[2024-07-24 10:27:51,550][00224] Num frames 3500... |
|
[2024-07-24 10:27:51,779][00224] Num frames 3600... |
|
[2024-07-24 10:27:51,877][00224] Avg episode rewards: #0: 27.720, true rewards: #0: 12.053 |
|
[2024-07-24 10:27:51,879][00224] Avg episode reward: 27.720, avg true_objective: 12.053 |
|
[2024-07-24 10:27:52,056][00224] Num frames 3700... |
|
[2024-07-24 10:27:52,239][00224] Num frames 3800... |
|
[2024-07-24 10:27:52,424][00224] Num frames 3900... |
|
[2024-07-24 10:27:52,606][00224] Num frames 4000... |
|
[2024-07-24 10:27:52,787][00224] Num frames 4100... |
|
[2024-07-24 10:27:52,976][00224] Num frames 4200... |
|
[2024-07-24 10:27:53,166][00224] Num frames 4300... |
|
[2024-07-24 10:27:53,366][00224] Num frames 4400... |
|
[2024-07-24 10:27:53,551][00224] Num frames 4500... |
|
[2024-07-24 10:27:53,741][00224] Num frames 4600... |
|
[2024-07-24 10:27:53,906][00224] Avg episode rewards: #0: 26.680, true rewards: #0: 11.680 |
|
[2024-07-24 10:27:53,908][00224] Avg episode reward: 26.680, avg true_objective: 11.680 |
|
[2024-07-24 10:27:53,950][00224] Num frames 4700... |
|
[2024-07-24 10:27:54,078][00224] Num frames 4800... |
|
[2024-07-24 10:27:54,206][00224] Num frames 4900... |
|
[2024-07-24 10:27:54,335][00224] Num frames 5000... |
|
[2024-07-24 10:27:54,472][00224] Num frames 5100... |
|
[2024-07-24 10:27:54,605][00224] Num frames 5200... |
|
[2024-07-24 10:27:54,733][00224] Num frames 5300... |
|
[2024-07-24 10:27:54,871][00224] Num frames 5400... |
|
[2024-07-24 10:27:54,975][00224] Avg episode rewards: #0: 24.676, true rewards: #0: 10.876 |
|
[2024-07-24 10:27:54,977][00224] Avg episode reward: 24.676, avg true_objective: 10.876 |
|
[2024-07-24 10:27:55,062][00224] Num frames 5500... |
|
[2024-07-24 10:27:55,188][00224] Num frames 5600... |
|
[2024-07-24 10:27:55,316][00224] Num frames 5700... |
|
[2024-07-24 10:27:55,452][00224] Num frames 5800... |
|
[2024-07-24 10:27:55,583][00224] Num frames 5900... |
|
[2024-07-24 10:27:55,715][00224] Num frames 6000... |
|
[2024-07-24 10:27:55,850][00224] Num frames 6100... |
|
[2024-07-24 10:27:55,981][00224] Num frames 6200... |
|
[2024-07-24 10:27:56,109][00224] Num frames 6300... |
|
[2024-07-24 10:27:56,239][00224] Num frames 6400... |
|
[2024-07-24 10:27:56,334][00224] Avg episode rewards: #0: 24.383, true rewards: #0: 10.717 |
|
[2024-07-24 10:27:56,336][00224] Avg episode reward: 24.383, avg true_objective: 10.717 |
|
[2024-07-24 10:27:56,436][00224] Num frames 6500... |
|
[2024-07-24 10:27:56,564][00224] Num frames 6600... |
|
[2024-07-24 10:27:56,694][00224] Num frames 6700... |
|
[2024-07-24 10:27:56,827][00224] Num frames 6800... |
|
[2024-07-24 10:27:56,960][00224] Num frames 6900... |
|
[2024-07-24 10:27:57,092][00224] Num frames 7000... |
|
[2024-07-24 10:27:57,222][00224] Num frames 7100... |
|
[2024-07-24 10:27:57,282][00224] Avg episode rewards: #0: 22.574, true rewards: #0: 10.146 |
|
[2024-07-24 10:27:57,284][00224] Avg episode reward: 22.574, avg true_objective: 10.146 |
|
[2024-07-24 10:27:57,414][00224] Num frames 7200... |
|
[2024-07-24 10:27:57,551][00224] Num frames 7300... |
|
[2024-07-24 10:27:57,684][00224] Num frames 7400... |
|
[2024-07-24 10:27:57,835][00224] Num frames 7500... |
|
[2024-07-24 10:27:57,936][00224] Avg episode rewards: #0: 20.523, true rewards: #0: 9.397 |
|
[2024-07-24 10:27:57,938][00224] Avg episode reward: 20.523, avg true_objective: 9.397 |
|
[2024-07-24 10:27:58,089][00224] Num frames 7600... |
|
[2024-07-24 10:27:58,248][00224] Num frames 7700... |
|
[2024-07-24 10:27:58,378][00224] Num frames 7800... |
|
[2024-07-24 10:27:58,519][00224] Num frames 7900... |
|
[2024-07-24 10:27:58,654][00224] Num frames 8000... |
|
[2024-07-24 10:27:58,782][00224] Num frames 8100... |
|
[2024-07-24 10:27:58,917][00224] Num frames 8200... |
|
[2024-07-24 10:27:59,047][00224] Num frames 8300... |
|
[2024-07-24 10:27:59,174][00224] Num frames 8400... |
|
[2024-07-24 10:27:59,304][00224] Num frames 8500... |
|
[2024-07-24 10:27:59,401][00224] Avg episode rewards: #0: 20.811, true rewards: #0: 9.478 |
|
[2024-07-24 10:27:59,402][00224] Avg episode reward: 20.811, avg true_objective: 9.478 |
|
[2024-07-24 10:27:59,499][00224] Num frames 8600... |
|
[2024-07-24 10:27:59,629][00224] Num frames 8700... |
|
[2024-07-24 10:27:59,758][00224] Num frames 8800... |
|
[2024-07-24 10:27:59,894][00224] Num frames 8900... |
|
[2024-07-24 10:28:00,024][00224] Num frames 9000... |
|
[2024-07-24 10:28:00,152][00224] Num frames 9100... |
|
[2024-07-24 10:28:00,280][00224] Num frames 9200... |
|
[2024-07-24 10:28:00,413][00224] Num frames 9300... |
|
[2024-07-24 10:28:00,554][00224] Num frames 9400... |
|
[2024-07-24 10:28:00,682][00224] Num frames 9500... |
|
[2024-07-24 10:28:00,812][00224] Num frames 9600... |
|
[2024-07-24 10:28:00,949][00224] Num frames 9700... |
|
[2024-07-24 10:28:01,128][00224] Avg episode rewards: #0: 21.696, true rewards: #0: 9.796 |
|
[2024-07-24 10:28:01,129][00224] Avg episode reward: 21.696, avg true_objective: 9.796 |
|
[2024-07-24 10:28:58,451][00224] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
[2024-07-24 10:29:46,527][00224] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-07-24 10:29:46,529][00224] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-07-24 10:29:46,530][00224] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-07-24 10:29:46,532][00224] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-07-24 10:29:46,534][00224] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-07-24 10:29:46,536][00224] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-07-24 10:29:46,537][00224] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
|
[2024-07-24 10:29:46,538][00224] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-07-24 10:29:46,540][00224] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
|
[2024-07-24 10:29:46,541][00224] Adding new argument 'hf_repository'='nithin04/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file! |
|
[2024-07-24 10:29:46,542][00224] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-07-24 10:29:46,543][00224] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-07-24 10:29:46,544][00224] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-07-24 10:29:46,545][00224] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-07-24 10:29:46,548][00224] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
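The "Overriding arg" / "Adding new argument" lines show evaluation-time settings being layered on top of the saved config.json: a single worker, no on-screen rendering, video capture, at most 100000 frames / 10 episodes, and a push to the Hub repo nithin04/rl_course_vizdoom_health_gathering_supreme. A sketch of the call that produces exactly these overrides, in the style of the Hugging Face Deep RL course notebook, follows; `sample_factory.enjoy.enjoy` is the real entry point, while `parse_vizdoom_cfg` stands in for the notebook's config helper and env registration is omitted, so treat those details as assumptions.

```python
# Sketch of the evaluation/push run that produced the overrides logged above.
# `parse_vizdoom_cfg` is a hypothetical stand-in for the course notebook's
# helper (built on sample_factory's parse_sf_args/parse_full_cfg).
from sample_factory.enjoy import enjoy

cfg = parse_vizdoom_cfg(  # hypothetical helper, see lead-in
    argv=[
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--no_render",
        "--save_video",
        "--max_num_frames=100000",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=nithin04/rl_course_vizdoom_health_gathering_supreme",
    ],
    evaluation=True,
)
status = enjoy(cfg)  # reloads config.json and applies the overrides logged above
```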
[2024-07-24 10:29:46,583][00224] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-07-24 10:29:46,585][00224] RunningMeanStd input shape: (1,) |
|
[2024-07-24 10:29:46,598][00224] ConvEncoder: input_channels=3 |
|
[2024-07-24 10:29:46,636][00224] Conv encoder output size: 512 |
|
[2024-07-24 10:29:46,637][00224] Policy head output size: 512 |
|
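The model summary lines above mirror the training-time setup: a running mean/std normalizer tracking per-pixel statistics over (3, 72, 128) image observations plus a scalar (1,) stream, feeding a convolutional encoder and policy head that both output 512-dimensional vectors. A minimal sketch of the standard running mean/std update follows (the parallel-moments formula used across RL codebases, e.g. OpenAI Baselines); Sample Factory's RunningMeanStdDictInPlace is a more elaborate in-place, dict-keyed variant.

```python
# Minimal sketch of a running mean/std tracker for the shapes logged above:
# (3, 72, 128) for image observations and (1,) for returns.
import numpy as np

class RunningMeanStd:
    def __init__(self, shape, epsilon=1e-4):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = epsilon  # avoids division by zero before the first update

    def update(self, batch):
        # batch: array of shape (N, *shape)
        batch_mean = batch.mean(axis=0)
        batch_var = batch.var(axis=0)
        batch_count = batch.shape[0]

        delta = batch_mean - self.mean
        total = self.count + batch_count
        # Chan et al. parallel update of mean and (biased) variance
        new_mean = self.mean + delta * batch_count / total
        m2 = (self.var * self.count + batch_var * batch_count
              + np.square(delta) * self.count * batch_count / total)
        self.mean, self.var, self.count = new_mean, m2 / total, total

    def normalize(self, x):
        return (x - self.mean) / np.sqrt(self.var + 1e-8)
```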
[2024-07-24 10:29:46,657][00224] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
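The checkpoint filename encodes training progress: policy 0 (checkpoint_p0), training iteration 978, and 4,005,888 environment frames, which is consistent with 978 x 4096 frames per iteration. A hedged way to peek inside is sketched below; Sample Factory checkpoints are ordinary torch pickles, but the exact key set is an assumption and may vary by version.

```python
# Hedged sketch: inspect the checkpoint the evaluation loads above.
import torch

ckpt_path = ("/content/train_dir/default_experiment/"
             "checkpoint_p0/checkpoint_000000978_4005888.pth")
ckpt = torch.load(ckpt_path, map_location="cpu")
print(sorted(ckpt.keys()))  # expect model/optimizer state plus step counters
```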
[2024-07-24 10:29:47,095][00224] Num frames 100... |
|
[2024-07-24 10:29:47,221][00224] Num frames 200... |
|
[2024-07-24 10:29:47,347][00224] Num frames 300... |
|
[2024-07-24 10:29:47,484][00224] Num frames 400... |
|
[2024-07-24 10:29:47,613][00224] Num frames 500... |
|
[2024-07-24 10:29:47,742][00224] Num frames 600... |
|
[2024-07-24 10:29:47,877][00224] Num frames 700... |
|
[2024-07-24 10:29:48,008][00224] Num frames 800... |
|
[2024-07-24 10:29:48,147][00224] Avg episode rewards: #0: 16.640, true rewards: #0: 8.640 |
|
[2024-07-24 10:29:48,149][00224] Avg episode reward: 16.640, avg true_objective: 8.640 |
|
[2024-07-24 10:29:48,196][00224] Num frames 900... |
|
[2024-07-24 10:29:48,324][00224] Num frames 1000... |
|
[2024-07-24 10:29:48,459][00224] Num frames 1100... |
|
[2024-07-24 10:29:48,588][00224] Num frames 1200... |
|
[2024-07-24 10:29:48,717][00224] Num frames 1300... |
|
[2024-07-24 10:29:48,852][00224] Num frames 1400... |
|
[2024-07-24 10:29:48,980][00224] Num frames 1500... |
|
[2024-07-24 10:29:49,113][00224] Num frames 1600... |
|
[2024-07-24 10:29:49,237][00224] Num frames 1700... |
|
[2024-07-24 10:29:49,406][00224] Avg episode rewards: #0: 17.460, true rewards: #0: 8.960 |
|
[2024-07-24 10:29:49,407][00224] Avg episode reward: 17.460, avg true_objective: 8.960 |
|
[2024-07-24 10:29:49,423][00224] Num frames 1800... |
|
[2024-07-24 10:29:49,573][00224] Num frames 1900... |
|
[2024-07-24 10:29:49,703][00224] Num frames 2000... |
|
[2024-07-24 10:29:49,832][00224] Num frames 2100... |
|
[2024-07-24 10:29:49,995][00224] Num frames 2200... |
|
[2024-07-24 10:29:50,128][00224] Num frames 2300... |
|
[2024-07-24 10:29:50,229][00224] Avg episode rewards: #0: 14.120, true rewards: #0: 7.787 |
|
[2024-07-24 10:29:50,232][00224] Avg episode reward: 14.120, avg true_objective: 7.787 |
|
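The "Avg episode rewards" lines report a running mean over the episodes completed so far in this evaluation, so individual episode returns can be recovered by differencing successive averages: r_n = n*avg_n - (n-1)*avg_{n-1}. Applied to the three averages logged above (16.640, 17.460, 14.120), this gives episode returns of 16.640, 18.280, and 7.440:

```python
# Recover per-episode returns from the running means in the log above:
#   r_n = n * avg_n - (n - 1) * avg_{n-1}
avgs = [16.640, 17.460, 14.120]  # 'Avg episode rewards' after episodes 1..3
prev = 0.0
for n, avg in enumerate(avgs, start=1):
    episode_return = n * avg - (n - 1) * prev
    print(f"episode {n}: return = {episode_return:.3f}")
    prev = avg
# episode 1: 16.640, episode 2: 18.280, episode 3: 7.440
```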
[2024-07-24 10:29:50,314][00224] Num frames 2400... |
|
[2024-07-24 10:29:50,439][00224] Num frames 2500... |
|
[2024-07-24 10:29:50,576][00224] Num frames 2600... |
|
[2024-07-24 10:29:50,702][00224] Num frames 2700... |
|
[2024-07-24 10:29:50,836][00224] Num frames 2800... |
|
[2024-07-24 10:29:50,966][00224] Num frames 2900... |
|
[2024-07-24 10:29:51,092][00224] Num frames 3000... |
|
[2024-07-24 10:29:51,219][00224] Num frames 3100... |
|
[2024-07-24 10:29:51,280][00224] Avg episode rewards: #0: 14.260, true rewards: #0: 7.760 |
|
[2024-07-24 10:29:51,281][00224] Avg episode reward: 14.260, avg true_objective: 7.760 |
|
[2024-07-24 10:29:51,403][00224] Num frames 3200... |
|
[2024-07-24 10:29:51,539][00224] Num frames 3300... |
|
[2024-07-24 10:29:51,669][00224] Num frames 3400... |
|
[2024-07-24 10:29:51,798][00224] Num frames 3500... |
|
[2024-07-24 10:29:51,934][00224] Num frames 3600... |
|
[2024-07-24 10:29:52,062][00224] Num frames 3700... |
|
[2024-07-24 10:29:52,188][00224] Num frames 3800... |
|
[2024-07-24 10:29:52,320][00224] Num frames 3900... |
|
[2024-07-24 10:29:52,448][00224] Num frames 4000... |
|
[2024-07-24 10:29:52,642][00224] Num frames 4100... |
|
[2024-07-24 10:29:52,826][00224] Num frames 4200... |
|
[2024-07-24 10:29:53,010][00224] Num frames 4300... |
|
[2024-07-24 10:29:53,196][00224] Num frames 4400... |
|
[2024-07-24 10:29:53,406][00224] Avg episode rewards: #0: 18.370, true rewards: #0: 8.970 |
|
[2024-07-24 10:29:53,409][00224] Avg episode reward: 18.370, avg true_objective: 8.970 |
|
[2024-07-24 10:29:53,440][00224] Num frames 4500... |
|
[2024-07-24 10:29:53,621][00224] Num frames 4600... |
|
[2024-07-24 10:29:53,808][00224] Num frames 4700... |
|
[2024-07-24 10:29:53,993][00224] Num frames 4800... |
|
[2024-07-24 10:29:54,182][00224] Num frames 4900... |
|
[2024-07-24 10:29:54,364][00224] Num frames 5000... |
|
[2024-07-24 10:29:54,552][00224] Num frames 5100... |
|
[2024-07-24 10:29:54,741][00224] Num frames 5200... |
|
[2024-07-24 10:29:54,882][00224] Avg episode rewards: #0: 18.103, true rewards: #0: 8.770 |
|
[2024-07-24 10:29:54,884][00224] Avg episode reward: 18.103, avg true_objective: 8.770 |
|
[2024-07-24 10:29:54,933][00224] Num frames 5300... |
|
[2024-07-24 10:29:55,067][00224] Num frames 5400... |
|
[2024-07-24 10:29:55,198][00224] Num frames 5500... |
|
[2024-07-24 10:29:55,325][00224] Num frames 5600... |
|
[2024-07-24 10:29:55,453][00224] Num frames 5700... |
|
[2024-07-24 10:29:55,589][00224] Num frames 5800... |
|
[2024-07-24 10:29:55,726][00224] Num frames 5900... |
|
[2024-07-24 10:29:55,860][00224] Num frames 6000... |
|
[2024-07-24 10:29:55,989][00224] Num frames 6100... |
|
[2024-07-24 10:29:56,118][00224] Num frames 6200... |
|
[2024-07-24 10:29:56,245][00224] Num frames 6300... |
|
[2024-07-24 10:29:56,325][00224] Avg episode rewards: #0: 18.883, true rewards: #0: 9.026 |
|
[2024-07-24 10:29:56,326][00224] Avg episode reward: 18.883, avg true_objective: 9.026 |
|
[2024-07-24 10:29:56,431][00224] Num frames 6400... |
|
[2024-07-24 10:29:56,560][00224] Num frames 6500... |
|
[2024-07-24 10:29:56,697][00224] Num frames 6600... |
|
[2024-07-24 10:29:56,829][00224] Num frames 6700... |
|
[2024-07-24 10:29:56,964][00224] Num frames 6800... |
|
[2024-07-24 10:29:57,094][00224] Num frames 6900... |
|
[2024-07-24 10:29:57,221][00224] Num frames 7000... |
|
[2024-07-24 10:29:57,349][00224] Num frames 7100... |
|
[2024-07-24 10:29:57,476][00224] Num frames 7200... |
|
[2024-07-24 10:29:57,604][00224] Num frames 7300... |
|
[2024-07-24 10:29:57,721][00224] Avg episode rewards: #0: 19.184, true rewards: #0: 9.184 |
|
[2024-07-24 10:29:57,723][00224] Avg episode reward: 19.184, avg true_objective: 9.184 |
|
[2024-07-24 10:29:57,797][00224] Num frames 7400... |
|
[2024-07-24 10:29:57,932][00224] Num frames 7500... |
|
[2024-07-24 10:29:58,064][00224] Num frames 7600... |
|
[2024-07-24 10:29:58,195][00224] Num frames 7700... |
|
[2024-07-24 10:29:58,323][00224] Num frames 7800... |
|
[2024-07-24 10:29:58,493][00224] Avg episode rewards: #0: 17.879, true rewards: #0: 8.768 |
|
[2024-07-24 10:29:58,495][00224] Avg episode reward: 17.879, avg true_objective: 8.768 |
|
[2024-07-24 10:29:58,510][00224] Num frames 7900... |
|
[2024-07-24 10:29:58,638][00224] Num frames 8000... |
|
[2024-07-24 10:29:58,775][00224] Num frames 8100... |
|
[2024-07-24 10:29:58,908][00224] Num frames 8200... |
|
[2024-07-24 10:29:59,037][00224] Num frames 8300... |
|
[2024-07-24 10:29:59,162][00224] Num frames 8400... |
|
[2024-07-24 10:29:59,293][00224] Num frames 8500... |
|
[2024-07-24 10:29:59,424][00224] Num frames 8600... |
|
[2024-07-24 10:29:59,553][00224] Num frames 8700... |
|
[2024-07-24 10:29:59,680][00224] Num frames 8800... |
|
[2024-07-24 10:29:59,849][00224] Num frames 8900... |
|
[2024-07-24 10:29:59,981][00224] Num frames 9000... |
|
[2024-07-24 10:30:00,111][00224] Num frames 9100... |
|
[2024-07-24 10:30:00,242][00224] Num frames 9200... |
|
[2024-07-24 10:30:00,373][00224] Num frames 9300... |
|
[2024-07-24 10:30:00,504][00224] Num frames 9400... |
|
[2024-07-24 10:30:00,630][00224] Num frames 9500... |
|
[2024-07-24 10:30:00,762][00224] Num frames 9600... |
|
[2024-07-24 10:30:00,904][00224] Num frames 9700... |
|
[2024-07-24 10:30:01,054][00224] Avg episode rewards: #0: 20.772, true rewards: #0: 9.772 |
|
[2024-07-24 10:30:01,055][00224] Avg episode reward: 20.772, avg true_objective: 9.772 |
|
[2024-07-24 10:30:56,246][00224] Replay video saved to /content/train_dir/default_experiment/replay.mp4! |
|
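With push_to_hub=True and hf_repository set, the run ends by uploading the experiment directory (checkpoint, config.json, and the replay.mp4 saved above) to the Hub. A hedged sketch of an equivalent manual upload follows; HfApi.create_repo and HfApi.upload_folder are real huggingface_hub APIs, but the exact file set Sample Factory pushes is an assumption.

```python
# Hedged sketch: roughly what --push_to_hub automates after the video save above.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "nithin04/rl_course_vizdoom_health_gathering_supreme"
api.create_repo(repo_id, repo_type="model", exist_ok=True)
api.upload_folder(
    folder_path="/content/train_dir/default_experiment",
    repo_id=repo_id,
    repo_type="model",
)
```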