Question 2/5 fast.ai v4 lecture 2

Does using a validation set guarantee we will not overfit?

Answer

Not at all! Every change of hyperparameters, every training run where we check the results against our validation set, makes it more likely that we will overfit to that validation set! In this case, the overfitting comes from picking an architecture and training parameters that happen to perform well on our validation set but that do not generalize to other unseen data. A toy demonstration of this selection effect follows below.
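This is not from the lecture, just an illustrative sketch using numpy: we "tune" by picking, out of many random guessers, the one that scores best on a small validation set. The winning validation accuracy looks impressive, but on a fresh test set it falls back to chance, because nothing real was learned.

```python
import numpy as np

rng = np.random.default_rng(0)
n_val, n_test, n_trials = 50, 10_000, 1_000

# Labels are pure coin flips, so no model can truly beat 50% accuracy.
y_val = rng.integers(0, 2, n_val)
y_test = rng.integers(0, 2, n_test)

# "Hyperparameter search": try many random guessers, keep the one that
# happens to score best on the small validation set.
best_val_acc, best_seed = 0.0, None
for seed in range(n_trials):
    preds = np.random.default_rng(seed).integers(0, 2, n_val)
    acc = (preds == y_val).mean()
    if acc > best_val_acc:
        best_val_acc, best_seed = acc, seed

# The selected "model" looks strong on the validation set (typically ~0.65+)...
print(f"best validation accuracy: {best_val_acc:.2f}")

# ...but on unseen test data it is back to chance, around 0.50.
test_preds = np.random.default_rng(best_seed).integers(0, 2, n_test)
print(f"test accuracy: {(test_preds == y_test).mean():.2f}")
```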

To be rigorous about this, we should set aside a third chunk of data, called the test set. This is a part of our dataset that is used neither during training nor for calculating our metrics. We do not look at it until the whole project is finished.
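A minimal sketch of such a three-way split, assuming a generic tabular dataset and scikit-learn's train_test_split (neither comes from the lecture; a random split like this is only sensible when samples are independent, which is exactly what Rachel Thomas's post below discusses):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data; in practice X (features) and y (labels) come from your own dataset.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, 1000)

# First carve off the test set and lock it away until the project is finished.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Then split what remains into training and validation sets
# (0.25 of the remaining 80% gives a 60/20/20 split overall).
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=42)
```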

Relevant part of lecture

Supplementary material

How (and why) to create a good validation set - a blog post by Rachel Thomas, probably the most definitive and practical resource on creating train/validation/test splits on the Internet