Task 2 Evaluation and Superpixel Generation

As described on the website, we need to compute the Jaccard index (also known as IoU, Intersection over Union) for each pair of ground-truth and predicted masks. Unlike Task 1, Task 2 contains many ground-truth masks that are all zeros. How should we compute the Jaccard index in the case where both the ground-truth and predicted masks are all zeros (i.e., Intersection = 0, Union = 0)?
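For concreteness, here is a minimal sketch of the per-image computation we mean; the function name and the handling of the empty/empty case are only illustrative, not part of the official evaluation:

```python
import numpy as np

def jaccard_per_image(gt_mask, pred_mask):
    """Per-image Jaccard (IoU) for a pair of binary masks."""
    gt = np.asarray(gt_mask, dtype=bool)
    pred = np.asarray(pred_mask, dtype=bool)
    intersection = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    if union == 0:
        # Both masks are empty: Intersection = Union = 0, so the ratio is
        # undefined. Conventions differ (score 1, score 0, or skip the image),
        # which is exactly the ambiguity we are asking about.
        return float('nan')
    return intersection / union
```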

In addition, the ground truth for Task 2 was generated after superpixel preprocessing with the SLIC algorithm, but the hyperparameters, such as the number of superpixels and the compactness for each image, were not released. Could you please provide the specific values used when generating the ground truth? It would also be helpful if you could provide superpixel visualizations like those in the 2017 Challenge.

Thank you.

Hi Dandi,

I would recommend standardizing the sizes of all images in your pipeline, and computing the Jaccard over all pixels in the set, rather than image-by-image. This will avoid the scenario you describe.
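Roughly, the pooled computation we have in mind looks like the sketch below; the helper name and details are illustrative, not the official evaluation script:

```python
import numpy as np

def pooled_jaccard(gt_masks, pred_masks):
    """Accumulate intersection and union over the whole set of (already
    size-standardized) masks, then divide once at the end."""
    total_intersection = 0
    total_union = 0
    for gt, pred in zip(gt_masks, pred_masks):
        gt = np.asarray(gt, dtype=bool)
        pred = np.asarray(pred, dtype=bool)
        total_intersection += np.logical_and(gt, pred).sum()
        total_union += np.logical_or(gt, pred).sum()
    # Images where both masks are empty contribute 0 to both totals, so they
    # no longer produce an undefined 0/0 score on their own.
    return total_intersection / total_union
```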

We will try to get you the additional data you request.

Hi @dandic,

Regarding superpixels, while we did use SLIC-generated superpixels to create ground truth annotations for Task 2, this is only a detail of our internal methodology for generating ground truth. Superpixels are not a part of the ISIC 2018 Task 2 challenge.

Our previous Challenges in 2016 and 2017 required participants to consider dermoscopic features within the context of a superpixel grid. However, we are concerned that this adds a level of additional complexity (both in describing the task goal, and in encoding the superpixel maps) that is not strictly necessary to the fundamental goal of localizing dermoscopic features. Accordingly, and to ensure that we have a fair evaluation of all Task 2 algorithms, we’re not releasing any superpixel masks with the Task 2 images.

Of course, to the extent that it is useful to the development or functioning of your algorithm, you are free to generate and use your own superpixels and associated visualizations. Please let us know in your submission abstract if you use any of these techniques!
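As a starting point, a minimal sketch using scikit-image might look like the following; note that n_segments and compactness are placeholder values, not the (unreleased) settings used for our annotations:

```python
from skimage.io import imread, imsave
from skimage.segmentation import slic, mark_boundaries
from skimage import img_as_ubyte

# Hypothetical input filename; substitute any Task 2 training image.
image = imread('ISIC_0000000.jpg')

# Placeholder hyperparameters; tune them for your own pipeline.
segments = slic(image, n_segments=250, compactness=10, start_label=1)

# Overlay superpixel boundaries, similar in spirit to the 2017 visualizations.
overlay = mark_boundaries(image, segments)
imsave('superpixels_overlay.png', img_as_ubyte(overlay))
```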

Thanks for the response. However, we found that a fixed size is set as part of the Task 2 evaluation (ISIC Challenge). Will 256×256 be used for the ranking? If not, could you please provide the specific size used for ranking? Also, there are multiple ways to resize images, for example with or without pre-filtering.
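For concreteness, the two resizing variants we are asking about could be sketched with scikit-image as follows (the 256×256 target and the interpolation settings are only examples):

```python
import numpy as np
from skimage.transform import resize

mask = np.zeros((767, 1022), dtype=bool)  # placeholder ground-truth mask
mask[200:500, 300:700] = True             # arbitrary foreground region

# With pre-filtering: Gaussian anti-aliasing before downsampling,
# then re-threshold to get a binary mask.
with_filtering = resize(mask.astype(float), (256, 256),
                        order=1, anti_aliasing=True) > 0.5

# Without pre-filtering: nearest-neighbour sampling, no smoothing.
without_filtering = resize(mask.astype(float), (256, 256),
                           order=0, anti_aliasing=False) > 0.5

# The two results can differ along mask boundaries, which changes the Jaccard.
```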

The Task 1 evaluation does not mention a fixed size. Can we assume that the average per-image Jaccard index will be computed at the original image size?

Thank you.