Why is it mandatory to upload a PDF file during submission?

Why would I have a PDF describing my approach if I still have not tested any of my approaches against the test set? This really makes no sense to me.

Can you kindly remove this restriction until I know which approach is best?


You must upload a PDF manuscript with your submissions describing your approach and your own internal validation results.

You will get no feedback on the test set, and your manuscript will not contain any analysis of test set performance.

As a warning: do not try to fit the validation score, as doing so may result in worse performance on the test set. The purpose of the validation score is to let you debug your submission scripts (a very low score may indicate a bug).


Sorry to be blunt but …
So if I have 5 different approaches, I need to compose 5 different papers, then blindly submit 5 CSVs and wait for irrelevant (as you stated) feedback regarding the performance of the models?

I cannot fathom how this methodology was conceived; there is no scientific merit behind it. Neither Kaggle nor Codalab works this way.


Interesting post

Thanks for your comments -

You’re welcome to submit one PDF that summarizes all the approaches that you submit and associate this PDF with all submissions. We anticipate that there will be many ways that participants will build their classifiers and those are the details that we expect in the PDF, as of course you won’t yet have the results of the submission.

We also have a maximum of 3 submissions per team. While we won't release results until the challenge is complete, we anticipate that teams will use internal validation strategies (such as cross-validation) to improve the performance of their submissions.
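Since the challenge provides no test-set feedback, internal validation is the only way to compare approaches before submitting. A minimal k-fold splitting sketch in plain Python is shown below; the fold count and seed are arbitrary choices here, and a full pipeline would train and score a model on each fold.

```python
import random

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, disjoint folds.

    Each fold serves once as the held-out validation set while the
    remaining folds are used for training.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    # Round-robin assignment keeps fold sizes within 1 of each other.
    return [idx[i::k] for i in range(k)]

# Example: 10 samples split into 5 folds of 2 samples each.
folds = kfold_indices(n=10, k=5)
```

Averaging a model's score across the held-out folds gives a more trustworthy internal estimate than a single train/validation split, which is what makes repeated blind submissions less of a gamble.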


We are aware this is a different procedure from Codalab and Kaggle. The intention is to improve the scientific value of the submissions to the community: scores alone don't explain differences in performance across approaches.

The PDFs will go through a manual review process. Inadequate documentation may result in disqualification.


FYI: during testing I submitted a CSV with purely random predictions, and its score was 0.3, if that is a useful reference for those submitting.