How will the submissions be evaluated: by accuracy or by AUC?
We are still finalizing the details of the evaluation criteria, but it will likely be similar or identical to the 2018 Challenge Task 3 metric of balanced multiclass accuracy.
We’ll post updates as soon as we have a more rigorous final description.
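For participants unfamiliar with the metric, here is a minimal sketch of balanced multiclass accuracy, computed as the mean of per-class recalls so that rare classes count as much as common ones. This is an illustration of the general metric, not the organizers' official scoring code:

```python
from collections import defaultdict

def balanced_multiclass_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class contributes equally,
    # regardless of how many examples it has.
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Example: "MEL" is predicted 2/2 correctly, "NV" only 1/3,
# so the score is (1.0 + 1/3) / 2, roughly 0.667.
y_true = ["MEL", "MEL", "NV", "NV", "NV"]
y_pred = ["MEL", "MEL", "NV", "MEL", "MEL"]
print(balanced_multiclass_accuracy(y_true, y_pred))
```

Note that plain accuracy on the same example would be 3/5 = 0.6; balanced accuracy differs whenever the class distribution is skewed.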
Another question regarding submissions. We are forming a team across different universities. What will the submission system be: a daily leaderboard (like Kaggle), or results only at the end? Could we also make more than one submission? (In the last competition, each team seemed to have three.) Regards, and thank you very much.
In the case of ISIC 2019, how many submissions are allowed per week? Will the result of each submission be available immediately?
Results will only be visible at the end. You will get feedback from the validation set (a very small subset of the test set) to debug your submissions and make sure your submission process is working correctly. However, those validation results will not be public, and you are discouraged from sharing them.
The number of submissions per week is still being worked out and will be communicated clearly when the test dataset is released.
The organizers should also specify the format in which solutions should be submitted (perhaps with an example file). Some of us are participating in this contest for the first time, and it is unusual that these basic points are not explained.
An example submission CSV file will be provided with the test data.
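Until that example file is available, here is a hypothetical sketch of writing per-image class probabilities to a CSV. The column names and layout below are assumptions for illustration only; follow the organizers' example file once it is released:

```python
import csv
import io

# Hypothetical class columns; the real header will come from the
# organizers' example submission file.
CLASSES = ["MEL", "NV", "BCC", "AK", "BKL", "DF", "VASC", "SCC", "UNK"]

def write_submission(rows, fileobj):
    """rows: iterable of (image_id, {class_name: probability}) pairs.
    Missing classes default to probability 0.0."""
    writer = csv.writer(fileobj)
    writer.writerow(["image"] + CLASSES)
    for image_id, probs in rows:
        writer.writerow(
            [image_id] + [f"{probs.get(c, 0.0):.6f}" for c in CLASSES]
        )

# Write one example row to an in-memory buffer and show the result.
buf = io.StringIO()
write_submission([("ISIC_0000000", {"MEL": 0.9, "NV": 0.1})], buf)
print(buf.getvalue())
```

In practice you would open a real file instead of a `StringIO` buffer and loop over your model's predictions for the whole test set.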