We have put up a starter solution for Task 1 and Task 3 here: https://github.com/yuanqing811/ISIC2018
Hope you find it useful. If you do, please let us know by commenting on or starring the repository. We will provide an updated baseline solution if there is enough interest.
All the best!
Fantastic - thank you for sharing this!
Regarding Task 3:
- Make sure you don’t have images from the same lesion in both your training and validation splits. The test set will not share lesions with the training data, so you might otherwise be disappointed by a much lower accuracy there. With the file posted here you can find out which training images belong to the same lesion.
- Be aware that the goal metric for Task 3 is not “raw” accuracy but balanced accuracy, as discussed here.
Thanks for the info! We will look into incorporating the lesion ID information in the validation split.
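For anyone wanting to do the same, a lesion-grouped split can be sketched with scikit-learn's `GroupShuffleSplit`, which keeps all images of a lesion on the same side of the split. This is only an illustration (the `image_ids`/`lesion_ids` lists are made up; the real mapping comes from the lesion-grouping file posted in this thread), not necessarily how the repository implements it:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical image -> lesion mapping; in practice this comes from
# the lesion-grouping file shared in this thread.
image_ids = ["img_0", "img_1", "img_2", "img_3", "img_4", "img_5"]
lesion_ids = ["les_a", "les_a", "les_b", "les_c", "les_c", "les_d"]

# Split by lesion group, so a lesion never straddles train and validation.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter.split(image_ids, groups=lesion_ids))

train_lesions = {lesion_ids[i] for i in train_idx}
val_lesions = {lesion_ids[i] for i in val_idx}
assert train_lesions.isdisjoint(val_lesions)  # no lesion leaks across splits
```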
Yes – we are aware that the metric is balanced accuracy, which in our understanding is the same as the mean per-class recall. Currently, the raw accuracy we are getting is about 88%, but the balanced accuracy is about 76%.
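The gap between the two numbers comes from class imbalance: raw accuracy is dominated by the majority class, while balanced accuracy averages recall over classes. A toy example with made-up labels (not challenge data) shows the effect:

```python
import numpy as np

# Imbalanced toy labels: predicting the majority class everywhere
# gives high raw accuracy but low balanced accuracy.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 2])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0])

# Balanced accuracy = mean of per-class recalls.
recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
balanced_acc = float(np.mean(recalls))        # (1.0 + 0.0 + 0.0) / 3 = 1/3
raw_acc = float(np.mean(y_true == y_pred))    # 6/8 = 0.75
```

The same value can be obtained with `sklearn.metrics.balanced_accuracy_score` in recent scikit-learn versions.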
This is really a great contribution to the effort. Thank you for sharing.
The validation split now takes lesion IDs into account. Balanced accuracy has decreased from 76% to 68.5%. Thanks for bringing the issue to our attention.
We have updated the code to include validation/test prediction and submission generation, along with k-fold cross-validation and test-time augmentation code. As requested by the organizers, we are not disclosing the validation scores, but we have confirmed that the outputs are in the correct format and give reasonable scores.
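Test-time augmentation in this setting typically means averaging class probabilities over a few deterministic transforms of each image. A minimal sketch, assuming a hypothetical `predict_fn` that maps an image array to a probability vector (the repository's actual implementation may differ):

```python
import numpy as np

def predict_with_tta(predict_fn, image):
    """Average predicted class probabilities over flip augmentations.

    predict_fn is a hypothetical callable: image array -> 1-D probability
    vector. Flips are label-preserving for dermoscopy images, so the
    averaged prediction stays valid.
    """
    variants = [
        image,
        np.fliplr(image),                 # horizontal flip
        np.flipud(image),                 # vertical flip
        np.fliplr(np.flipud(image)),      # both flips (180° rotation)
    ]
    probs = np.stack([predict_fn(v) for v in variants])
    return probs.mean(axis=0)
```

With a k-fold ensemble, the same averaging can be applied across the fold models as well before generating the submission.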
Hope this helps – we won’t be providing any more updates (other than bug fixes). We were not able to get anything to train stably for Task 2, so we look forward to seeing the solutions when they become public.
All the best to everyone!