Preventing overfitting is one of the fundamental challenges in machine learning. Good practices include diversifying the training data, applying techniques such as regularization or early stopping, and, above all, validating on fully independent datasets. Current benchmarks are showing their limits against actors who can optimize their models specifically for those tests. The scientific community is now calling for dynamic evaluations whose parameters change regularly, making overfitting much harder.
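To make these practices concrete, here is a minimal sketch combining two of the techniques mentioned above: L2 regularization on the training loss and early stopping monitored on a held-out validation split. The synthetic linear-regression task, the `patience` threshold, and the `l2` coefficient are illustrative assumptions, not something from the original text.

```python
import random

random.seed(0)

# Hypothetical synthetic task: noisy linear data, y ≈ 2x + 1.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1))
        for x in (random.uniform(-1, 1) for _ in range(200))]
train, val = data[:150], data[150:]  # validation split kept independent

def mse(w, b, points, l2=0.0):
    """Mean squared error, optionally with an L2 regularization term."""
    loss = sum((w * x + b - y) ** 2 for x, y in points) / len(points)
    return loss + l2 * (w * w + b * b)

w, b, lr, l2 = 0.0, 0.0, 0.1, 1e-3  # illustrative hyperparameters
best_val, best_params, patience, bad = float("inf"), (w, b), 10, 0

for epoch in range(500):
    # Gradient step on the L2-regularized training loss.
    n = len(train)
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / n + 2 * l2 * w
    gb = sum(2 * (w * x + b - y) for x, y in train) / n + 2 * l2 * b
    w, b = w - lr * gw, b - lr * gb

    # Validate on held-out data (no regularization term in the metric).
    v = mse(w, b, val)
    if v < best_val - 1e-6:
        best_val, best_params, bad = v, (w, b), 0
    else:
        bad += 1
        if bad >= patience:  # early stopping: halt when validation stalls
            break

w, b = best_params  # keep the parameters that generalized best
print(f"w ≈ {w:.2f}, b ≈ {b:.2f}")  # close to the true values 2 and 1
```

The key design point is that the stopping decision never looks at the training loss: it watches the independent validation set, so training halts as soon as further optimization stops generalizing, which is exactly the failure mode the paragraph describes for static benchmarks.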