What is the Evaluation Metric for the competition?

In the ISIC 2018 Task 3 challenge, several metrics were reported, such as balanced accuracy and AUC. Will it be similar this year? Also, what's the difference between accuracy and balanced accuracy, given that the two are calculated differently?

+1 on that; an answer would be much appreciated. Without knowing the evaluation metric, we are all pretty much navigating blind.

Ranking is done using balanced accuracy, the same metric as last year. The outlier class in the test set is counted the same as every other class.
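
For anyone wondering about the practical difference: balanced accuracy is the mean of the per-class recalls, so it is insensitive to class imbalance, while plain accuracy is just the fraction of correct predictions. A minimal sketch of the idea using scikit-learn's metrics (for illustration only, not the official scoring code, which is linked below):

```python
# Accuracy vs. balanced accuracy on an imbalanced toy example.
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# 9 samples of class 0, 1 sample of class 1 (heavily imbalanced).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
# A degenerate classifier that always predicts the majority class:
y_pred = [0] * 10

print(accuracy_score(y_true, y_pred))           # 0.9 -- looks great
print(balanced_accuracy_score(y_true, y_pred))  # 0.5 -- mean of per-class
                                                # recalls: (1.0 + 0.0) / 2
```

So a model that ignores rare classes can score high on accuracy but gets penalized under balanced accuracy, which is why it is used for ranking.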

The evaluation code is publicly available on GitHub: https://github.com/ImageMarkup/isic-challenge-scoring (automated scoring code for the ISIC Challenge).
