Are there any small datasets we can use to check our submission procedure? Without one, I think a lot of submissions will be invalid.
You will get a validation score with your submission to debug your submission procedures. A very low score may indicate a problem (you can try to submit random predictions to see). We do not recommend fitting your approach to the validation score – doing so may result in lower test set performance.
Thanks for the reply!
So the situation is: we can submit our models multiple times (more than 3) to debug our submission procedure, and then we choose 3 of them as our team's final submission?
Thanks a lot!
I found the answer in the thread below.