Program output
Model training.
Epoch: 1 LR: 0.00100
Train Loss: tensor(0.0157, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 2 LR: 0.00100
Train Loss: tensor(0.0087, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 3 LR: 0.00100
Train Loss: tensor(0.0067, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 4 LR: 0.00100
Train Loss: tensor(0.0056, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 5 LR: 0.00100
Train Loss: tensor(0.0051, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 6 LR: 0.00100
Train Loss: tensor(0.0045, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 7 LR: 0.00100
Train Loss: tensor(0.0043, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 8 LR: 0.00100
Train Loss: tensor(0.0040, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 9 LR: 0.00100
Train Loss: tensor(0.0040, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 10 LR: 0.00100
Train Loss: tensor(0.0036, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 11 LR: 0.00100
Train Loss: tensor(0.0036, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 12 LR: 0.00100
Train Loss: tensor(0.0034, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 13 LR: 0.00100
Train Loss: tensor(0.0034, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 14 LR: 0.00100
Train Loss: tensor(0.0033, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 15 LR: 0.00100
Train Loss: tensor(0.0032, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 16 LR: 0.00100
Train Loss: tensor(0.0031, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 17 LR: 0.00100
Train Loss: tensor(0.0030, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 18 LR: 0.00100
Train Loss: tensor(0.0030, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 19 LR: 0.00100
Train Loss: tensor(0.0030, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 20 LR: 0.00100
Train Loss: tensor(0.0029, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 21 LR: 0.00100
Train Loss: tensor(0.0028, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 22 LR: 0.00100
Train Loss: tensor(0.0028, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 23 LR: 0.00100
Train Loss: tensor(0.0028, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 24 LR: 0.00100
Train Loss: tensor(0.0027, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 25 LR: 0.00100
Train Loss: tensor(0.0027, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 26 LR: 0.00010
Train Loss: tensor(0.0024, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 27 LR: 0.00010
Train Loss: tensor(0.0024, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 28 LR: 0.00010
Train Loss: tensor(0.0024, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 29 LR: 0.00010
Train Loss: tensor(0.0023, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 30 LR: 0.00010
Train Loss: tensor(0.0023, device='cuda:0', grad_fn=<DivBackward0>)
Epoch: 31 LR: 0.00010
Train Loss: tensor(0.0023, device='cuda:0', grad_fn=<DivBackward0>)
12284.927605867386
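The learning rate in the log drops by a factor of ten at epoch 26 (0.00100 → 0.00010), which is consistent with a step-decay schedule. A minimal sketch of that schedule, assuming a step size of 25 epochs and a decay factor of 0.1 (both inferred from the log, not confirmed by the source):

```python
def step_lr(base_lr, epoch, step_size=25, gamma=0.1):
    """Step-decay schedule matching the log: the base LR holds for
    `step_size` epochs, then is multiplied by `gamma` (epochs are 1-based)."""
    return base_lr * (gamma ** ((epoch - 1) // step_size))

# Reproduce the LR column of the log for epochs 1..31.
for epoch in range(1, 32):
    print(f"Epoch: {epoch} LR: {step_lr(1e-3, epoch):.5f}")
```

As an aside, the `tensor(..., device='cuda:0', grad_fn=<DivBackward0>)` wrappers in the loss lines indicate the loss tensor was printed directly; calling `.item()` on it before logging would print a plain scalar such as `Train Loss: 0.0157` instead.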