
[Deep Learning] Note 2 - Model Accuracy Higher on the Test Set than on the Training Set

2022-08-11 11:57:00 aaaafeng

Preface

Activity address: CSDN 21-day Learning Challenge
Blogger homepage: Aaaafeng's homepage_CSDN

Keep input, keep output! (quoting a friend of mine)



1. Description of the problem

During model training, I suddenly noticed that the model's accuracy was actually higher on the test set than on the training set. But we know that training works by minimizing the loss on the training set, so it should be normal for the model to perform better on the training set.
So, what caused the higher accuracy on the test set?

Model training results:

[Figure: model training results]

2. Fixing the problem

2.1. Underfitting

Later I consulted an experienced friend, who said: "Train for a few more epochs and see; the first few epochs are still underfitting." I immediately thought: good suggestion!

Increase the number of training epochs:
[Figure: results after increasing the number of training epochs]

Sure enough! As the number of training epochs increased, the model's accuracy gradually returned to the expected pattern: the accuracy on the training set once again exceeded that on the test set.

2.2. Lag in mini-batch statistics

But I still had some doubts: why, in the underfitting state with fewer training epochs, does the model have higher accuracy on the test set? What is the connection between the two?
A blog post gives an explanation that I find very reasonable and well matched to the situation I ran into:

The training-set accuracy is computed after each batch, while the validation-set accuracy is usually computed only after a full epoch. The model used for validation has therefore already been trained on many more batches, so the training-set figure lags behind. In other words, a nearly fully trained model is used for validation, so of course its accuracy is higher.
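To see how large this lag can be, here is a toy calculation of my own (the numbers are hypothetical, not taken from the original post or my training logs): if the model's accuracy improves roughly linearly from 50% to 90% over the batches of one epoch, the batch-averaged "train acc" sits near the middle of that range, while the end-of-epoch test accuracy reflects the final, better model.

```python
# Toy illustration of the mini-batch lag (hypothetical numbers, not real training logs).
# Assume accuracy improves roughly linearly from 50% to 90% across 100 batches in one epoch.
num_batches = 100
batch_acc = [0.50 + (0.90 - 0.50) * i / (num_batches - 1) for i in range(num_batches)]

# "train acc" as usually reported: the running average over all batches of the epoch.
train_acc_running = sum(batch_acc) / num_batches

# "test acc" is measured once, after the epoch, with the final model of that epoch.
test_acc_after_epoch = batch_acc[-1]  # stand-in for the end-of-epoch model's accuracy

print(f"batch-averaged train acc: {train_acc_running:.2f}")    # ~0.70
print(f"end-of-epoch test acc:    {test_acc_after_epoch:.2f}")  # ~0.90
```

So even if the model generalizes no better than it fits, the reported training accuracy can trail the test accuracy simply because it averages over older versions of the model.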

In other words, the issue lies in how the training-set accuracy is computed. If the model's accuracy on the training set is measured after each epoch, rather than accumulated at the end of each mini-batch, this problem does not occur.
Of course, talk is not enough; you have to try it. I checked my earlier model code and found that my training-set accuracy was indeed accumulated after each mini-batch. So I might as well also compute the training-set accuracy after each epoch and compare.
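As a concrete sketch of the two ways of counting, here is a minimal PyTorch-style example I wrote for illustration; `model`, `train_loader`, `test_loader`, `optimizer`, and `criterion` are assumed to exist and this is not the blogger's actual code.

```python
import torch

def run_epoch(model, train_loader, test_loader, optimizer, criterion, device="cpu"):
    # --- Way 1: "train acc" accumulated batch by batch while the weights keep changing ---
    model.train()
    correct, total = 0, 0
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        logits = model(x)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == y).sum().item()
        total += y.size(0)
    train_acc_per_batch = correct / total  # lags: early batches were scored by an older model

    # --- Way 2: "train acc 2" measured after the epoch, with the final weights ---
    def evaluate(loader):
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                logits = model(x)
                correct += (logits.argmax(dim=1) == y).sum().item()
                total += y.size(0)
        return correct / total

    train_acc_after_epoch = evaluate(train_loader)  # comparable to the test accuracy
    test_acc = evaluate(test_loader)
    return train_acc_per_batch, train_acc_after_epoch, test_acc
```

Comparing `train_acc_after_epoch` with `test_acc` puts the two numbers on an equal footing, which is exactly the "train acc 2" comparison below.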

Accuracy on the training set computed after each epoch (train acc 2):
[Figure: training log including the per-epoch training accuracy (train acc 2)]

It is easy to see that even in an underfitting state, if the training-set and test-set accuracies are computed in the same way (both after each epoch), the model is still more accurate on the training set.


References:
Neural Networks and Deep Learning - Reasons why validation-set (test-set) accuracy is higher than training-set accuracy


Summary

When you run into a problem, looking at how other people think about it can make everything click in an instant. It is not a good idea to get stuck in a dead end on your own.

Copyright notice
This article was written by [aaaafeng]. When reposting, please include a link to the original article. Thank you.
Original article: https://yzsam.com/2022/223/202208111144116023.html