pfcfuse/logs/log_20241006_164911.log
whaifree 5e561ab6f7 Refactored the code structure to improve readability and maintainability; adjusted the training output frequency.
Changed self.enhancement_module to
        self.enhancement_module = WTConv2d(32, 32)
2024-10-09 11:35:06 +08:00
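For context, a minimal sketch of the change described in the note above, assuming WTConv2d is the wavelet convolution layer from the WTConv project; the host module and the pre-change layer are hypothetical placeholders, not the actual PFCFuse code:

    import torch.nn as nn
    from wtconv import WTConv2d  # assumption: the WTConv package is installed and importable

    class EnhancementBlock(nn.Module):  # hypothetical host module, for illustration only
        def __init__(self):
            super().__init__()
            # Before the commit this was presumably an ordinary convolution, e.g.:
            #   self.enhancement_module = nn.Conv2d(32, 32, kernel_size=3, padding=1)
            # After the commit, per the note above:
            self.enhancement_module = WTConv2d(32, 32)  # wavelet conv, 32 channels in and out

        def forward(self, x):
            return self.enhancement_module(x)

Note that WTConv2d operates depthwise, so it expects matching input and output channel counts, which the 32 -> 32 configuration here satisfies.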

2.4.1+cu121
True
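The two bare lines above are consistent with the usual environment check printed at startup, i.e. the PyTorch version string and the CUDA availability flag (the labels are an inference, not part of the log):

    import torch
    print(torch.__version__)          # -> 2.4.1+cu121
    print(torch.cuda.is_available())  # -> True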
Model: PFCFuse
Number of epochs: 60
Epoch gap: 40
Learning rate: 0.0001
Weight decay: 0
Batch size: 1
GPU number: 0
Coefficient of MSE loss VF: 1.0
Coefficient of MSE loss IF: 1.0
Coefficient of RMI loss VF: 1.0
Coefficient of RMI loss IF: 1.0
Coefficient of Cosine loss VF: 1.0
Coefficient of Cosine loss IF: 1.0
Coefficient of Decomposition loss: 2.0
Coefficient of Total Variation loss: 5.0
Clip gradient norm value: 0.01
Optimization step: 20
Optimization gamma: 0.5
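For reference, a minimal sketch of how these hyperparameters could enter a training step, assuming the coefficients weight a plain sum of the listed loss terms, "Clip gradient norm value" is passed to gradient clipping, and "Optimization step/gamma" configure a StepLR schedule. The model and every loss term below are runnable stand-ins, not the PFCFuse implementations:

    import torch
    import torch.nn as nn

    model = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for PFCFuse
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

    vis, ir = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)  # dummy image pair
    fused = model(vis + ir)

    # Placeholder loss terms named after the coefficients in the log.
    mse_vf = nn.functional.mse_loss(fused, vis)
    mse_if = nn.functional.mse_loss(fused, ir)
    rmi_vf = rmi_if = cos_vf = cos_if = fused.abs().mean()  # placeholders only
    loss_decomp = loss_tv = fused.abs().mean()              # placeholders only

    loss = (1.0 * (mse_vf + mse_if)
            + 1.0 * (rmi_vf + rmi_if)
            + 1.0 * (cos_vf + cos_if)
            + 2.0 * loss_decomp
            + 5.0 * loss_tv)

    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.01)
    optimizer.step()
    scheduler.step()  # halves the learning rate every 20 epochs

Under this reading, the schedule halves the learning rate from 1e-4 to 5e-5 at epoch 20 and to 2.5e-5 at epoch 40.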
[Epoch 0/60] [Batch 0/6487] [loss: 5.934072] ETA: 10 days, 0
[Epoch 0/60] [Batch 1/6487] [loss: 7.354875] ETA: 10:34:34.9
[Epoch 0/60] [Batch 2/6487] [loss: 7.649462] ETA: 11:02:11.7
[Epoch 0/60] [Batch 3/6487] [loss: 4.681341] ETA: 9:54:34.61
[Epoch 0/60] [Batch 4/6487] [loss: 15.397819] ETA: 9:54:57.26
[Epoch 0/60] [Batch 5/6487] [loss: 11.085931] ETA: 10:22:22.5
[Epoch 0/60] [Batch 6/6487] [loss: 13.419497] ETA: 11:44:06.8
[Epoch 0/60] [Batch 7/6487] [loss: 8.841534] ETA: 10:31:40.0
[Epoch 0/60] [Batch 8/6487] [loss: 4.809514] ETA: 10:40:48.2
[Epoch 0/60] [Batch 9/6487] [loss: 5.460008] ETA: 10:35:16.4
[Epoch 0/60] [Batch 10/6487] [loss: 6.607483] ETA: 11:04:44.2
[Epoch 0/60] [Batch 11/6487] [loss: 8.002920] ETA: 10:30:11.7
[Epoch 0/60] [Batch 12/6487] [loss: 6.442471] ETA: 10:17:31.8
[Epoch 0/60] [Batch 13/6487] [loss: 12.265147] ETA: 11:30:47.2
[Epoch 0/60] [Batch 14/6487] [loss: 4.954008] ETA: 11:42:32.6
[Epoch 0/60] [Batch 15/6487] [loss: 10.585257] ETA: 10:55:33.8
[Epoch 0/60] [Batch 16/6487] [loss: 8.780766] ETA: 10:38:00.8
[Epoch 0/60] [Batch 17/6487] [loss: 8.221046] ETA: 10:50:32.7
[Epoch 0/60] [Batch 18/6487] [loss: 4.333150] ETA: 10:44:08.0
[Epoch 0/60] [Batch 19/6487] [loss: 3.702891] ETA: 11:06:37.4
[Epoch 0/60] [Batch 20/6487] [loss: 5.839406] ETA: 11:43:10.3
[Epoch 0/60] [Batch 21/6487] [loss: 3.961552] ETA: 10:05:33.6
[Epoch 0/60] [Batch 22/6487] [loss: 3.017392] ETA: 10:41:59.1
[Epoch 0/60] [Batch 23/6487] [loss: 10.637247] ETA: 10:19:07.7
[Epoch 0/60] [Batch 24/6487] [loss: 2.622610] ETA: 10:14:55.3
[Epoch 0/60] [Batch 25/6487] [loss: 4.779226] ETA: 9:54:12.09
[Epoch 0/60] [Batch 26/6487] [loss: 2.632162] ETA: 9:37:18.26
[Epoch 0/60] [Batch 27/6487] [loss: 3.904268] ETA: 9:56:16.71
[Epoch 0/60] [Batch 28/6487] [loss: 7.274397] ETA: 9:40:39.06
[Epoch 0/60] [Batch 29/6487] [loss: 2.386863] ETA: 10:30:09.7
[Epoch 0/60] [Batch 30/6487] [loss: 2.582200] ETA: 10:08:56.5
[Epoch 0/60] [Batch 31/6487] [loss: 3.013002] ETA: 10:15:19.2
[Epoch 0/60] [Batch 32/6487] [loss: 5.693004] ETA: 10:05:53.4
[Epoch 0/60] [Batch 33/6487] [loss: 2.899549] ETA: 10:03:38.5
[Epoch 0/60] [Batch 34/6487] [loss: 3.194182] ETA: 10:33:07.2
[Epoch 0/60] [Batch 35/6487] [loss: 3.742230] ETA: 10:10:17.6
[Epoch 0/60] [Batch 36/6487] [loss: 3.236870] ETA: 9:40:05.59
[Epoch 0/60] [Batch 37/6487] [loss: 2.928835] ETA: 9:37:05.77
[Epoch 0/60] [Batch 38/6487] [loss: 2.313494] ETA: 9:51:05.79
[Epoch 0/60] [Batch 39/6487] [loss: 2.723912] ETA: 9:23:56.15
[Epoch 0/60] [Batch 40/6487] [loss: 1.680832] ETA: 9:50:13.36
[Epoch 0/60] [Batch 41/6487] [loss: 1.744040] ETA: 9:30:19.47
[Epoch 0/60] [Batch 42/6487] [loss: 1.740687] ETA: 9:33:38.13
[Epoch 0/60] [Batch 43/6487] [loss: 2.356352] ETA: 9:46:33.46
[Epoch 0/60] [Batch 44/6487] [loss: 2.442476] ETA: 10:26:42.3
[Epoch 0/60] [Batch 45/6487] [loss: 1.624857] ETA: 9:38:42.11
[Epoch 0/60] [Batch 46/6487] [loss: 1.195396] ETA: 10:01:12.9
[Epoch 0/60] [Batch 47/6487] [loss: 1.149045] ETA: 10:24:53.1
[Epoch 0/60] [Batch 48/6487] [loss: 1.695918] ETA: 9:39:42.25
[Epoch 0/60] [Batch 49/6487] [loss: 2.567844] ETA: 9:28:53.50
[Epoch 0/60] [Batch 50/6487] [loss: 1.230891] ETA: 9:34:51.10
[Epoch 0/60] [Batch 51/6487] [loss: 1.958381] ETA: 10:21:43.3
[Epoch 0/60] [Batch 52/6487] [loss: 1.503905] ETA: 9:47:06.89
[Epoch 0/60] [Batch 53/6487] [loss: 2.220990] ETA: 9:38:16.26
[Epoch 0/60] [Batch 54/6487] [loss: 1.354937] ETA: 10:13:29.8
[Epoch 0/60] [Batch 55/6487] [loss: 1.741669] ETA: 9:47:41.13
[Epoch 0/60] [Batch 56/6487] [loss: 1.656092] ETA: 9:38:35.94
[Epoch 0/60] [Batch 57/6487] [loss: 1.487051] ETA: 9:43:43.98
[Epoch 0/60] [Batch 58/6487] [loss: 1.158252] ETA: 9:37:07.43
[Epoch 0/60] [Batch 59/6487] [loss: 1.418594] ETA: 9:45:47.30
[Epoch 0/60] [Batch 60/6487] [loss: 0.926793] ETA: 9:34:48.45
[Epoch 0/60] [Batch 61/6487] [loss: 1.154947] ETA: 9:45:05.27
[Epoch 0/60] [Batch 62/6487] [loss: 1.088354] ETA: 9:36:13.17
[Epoch 0/60] [Batch 63/6487] [loss: 1.382724] ETA: 9:45:53.15
[Epoch 0/60] [Batch 64/6487] [loss: 1.270256] ETA: 9:34:49.67
[Epoch 0/60] [Batch 65/6487] [loss: 1.212800] ETA: 9:38:32.91
[Epoch 0/60] [Batch 66/6487] [loss: 1.278388] ETA: 9:29:38.77
[Epoch 0/60] [Batch 67/6487] [loss: 0.964551] ETA: 9:44:08.97
[Epoch 0/60] [Batch 68/6487] [loss: 1.119809] ETA: 9:46:03.46
[Epoch 0/60] [Batch 69/6487] [loss: 1.200209] ETA: 9:36:40.93
[Epoch 0/60] [Batch 70/6487] [loss: 0.928674] ETA: 10:13:18.0
[Epoch 0/60] [Batch 71/6487] [loss: 0.953235] ETA: 9:25:48.79
[Epoch 0/60] [Batch 72/6487] [loss: 1.015199] ETA: 10:07:10.2
[Epoch 0/60] [Batch 73/6487] [loss: 1.366789] ETA: 9:42:28.78
[Epoch 0/60] [Batch 74/6487] [loss: 0.852195] ETA: 9:46:35.86
[Epoch 0/60] [Batch 75/6487] [loss: 0.970752] ETA: 9:27:36.71
[Epoch 0/60] [Batch 76/6487] [loss: 0.945151] ETA: 9:42:07.17
[Epoch 0/60] [Batch 77/6487] [loss: 1.056933] ETA: 9:51:46.30
[Epoch 0/60] [Batch 78/6487] [loss: 0.967538] ETA: 9:53:43.02
[Epoch 0/60] [Batch 79/6487] [loss: 1.335156] ETA: 9:48:42.14
[Epoch 0/60] [Batch 80/6487] [loss: 0.875067] ETA: 9:38:13.29
[Epoch 0/60] [Batch 81/6487] [loss: 1.233467] ETA: 9:47:52.14
[Epoch 0/60] [Batch 82/6487] [loss: 0.987392] ETA: 9:35:12.29
[Epoch 0/60] [Batch 83/6487] [loss: 0.747759] ETA: 9:46:59.54
[Epoch 0/60] [Batch 84/6487] [loss: 1.090464] ETA: 9:31:13.58
[Epoch 0/60] [Batch 85/6487] [loss: 0.750839] ETA: 9:39:39.31
[Epoch 0/60] [Batch 86/6487] [loss: 0.641965] ETA: 9:53:23.73
[Epoch 0/60] [Batch 87/6487] [loss: 0.934432] ETA: 9:43:49.35
[Epoch 0/60] [Batch 88/6487] [loss: 0.903003] ETA: 9:47:28.22
[Epoch 0/60] [Batch 89/6487] [loss: 1.053638] ETA: 9:38:13.14
[Epoch 0/60] [Batch 90/6487] [loss: 1.117509] ETA: 9:47:05.40
[Epoch 0/60] [Batch 91/6487] [loss: 1.065620] ETA: 9:33:11.63
[Epoch 0/60] [Batch 92/6487] [loss: 1.024057] ETA: 9:45:20.57
[Epoch 0/60] [Batch 93/6487] [loss: 0.935583] ETA: 9:36:53.00
[Epoch 0/60] [Batch 94/6487] [loss: 0.878598] ETA: 9:36:23.87
[Epoch 0/60] [Batch 95/6487] [loss: 0.996002] ETA: 9:56:35.51
[Epoch 0/60] [Batch 96/6487] [loss: 0.877100] ETA: 10:01:53.0
[Epoch 0/60] [Batch 97/6487] [loss: 0.801905] ETA: 10:23:37.3
[Epoch 0/60] [Batch 98/6487] [loss: 0.764629] ETA: 10:06:40.1
[Epoch 0/60] [Batch 99/6487] [loss: 1.359203] ETA: 9:42:38.41
[Epoch 0/60] [Batch 100/6487] [loss: 0.983067] ETA: 9:26:30.33
[Epoch 0/60] [Batch 101/6487] [loss: 1.577888] ETA: 10:27:22.7
[Epoch 0/60] [Batch 102/6487] [loss: 0.870093] ETA: 9:43:22.40
[Epoch 0/60] [Batch 103/6487] [loss: 0.907525] ETA: 10:14:02.4
[Epoch 0/60] [Batch 104/6487] [loss: 1.100220] ETA: 9:27:20.07
[Epoch 0/60] [Batch 105/6487] [loss: 0.731582] ETA: 9:25:04.73
[Epoch 0/60] [Batch 106/6487] [loss: 1.249132] ETA: 9:30:11.81
[Epoch 0/60] [Batch 107/6487] [loss: 0.826418] ETA: 9:27:59.80
[Epoch 0/60] [Batch 108/6487] [loss: 1.349152] ETA: 10:18:03.2
Traceback (most recent call last):
File "/home/star/whaiDir/PFCFuse/train.py", line 180, in <module>
loss.backward()
File "/home/star/anaconda3/envs/pfcfuse/lib/python3.8/site-packages/torch/_tensor.py", line 521, in backward
torch.autograd.backward(
File "/home/star/anaconda3/envs/pfcfuse/lib/python3.8/site-packages/torch/autograd/__init__.py", line 289, in backward
_engine_run_backward(
File "/home/star/anaconda3/envs/pfcfuse/lib/python3.8/site-packages/torch/autograd/graph.py", line 769, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
KeyboardInterrupt
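The traceback ends in KeyboardInterrupt raised inside loss.backward(), so the run was stopped manually at batch 108 of epoch 0 rather than by an error. If resumability matters, one option (not shown anywhere in this log; the checkpoint path and dict keys are illustrative) is to wrap the training loop and save state on interrupt:

    import torch
    import torch.nn as nn

    model = nn.Conv2d(1, 1, 3, padding=1)  # stand-in model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    epoch = 0
    try:
        for epoch in range(60):
            x = torch.rand(1, 1, 64, 64)
            loss = model(x).abs().mean()
            optimizer.zero_grad()
            loss.backward()  # the call that was interrupted above
            optimizer.step()
    except KeyboardInterrupt:
        # Save a resumable checkpoint before exiting; path and keys are illustrative.
        torch.save({"epoch": epoch,
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict()},
                   "interrupt_checkpoint.pth")
        raise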