How do I solve "Error processing submission"?

covalic.error: Missing columns in CSV: ['AKIEC', 'BCC', 'BKL', 'DF', 'MEL', 'NV', 'VASC'].
HTTP push failed (https://challenge.kitware.com/api/v1/covalic_submission/5b5b668b56357d238b3db1ea/score). Response: {"message": "JSON parameter score must be passed in request body.", "type": "rest"}
HTTPError: 400 Client Error: Bad Request for url: https://challenge.kitware.com/api/v1/covalic_submission/5b5b668b56357d238b3db1ea/score

Hi @fercho,

The key line in this error is the first one:

covalic.error: Missing columns in CSV: ['AKIEC', 'BCC', 'BKL', 'DF', 'MEL', 'NV', 'VASC'].

You are likely missing a header row in your submission CSV that identifies each column by one of these diagnosis labels.
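If it helps, here is a minimal pre-flight check you can run locally before uploading. This is just a sketch: the file name submission.csv is a placeholder, and pandas is assumed to be available.

```python
import pandas as pd

REQUIRED = ["MEL", "NV", "BCC", "AKIEC", "BKL", "DF", "VASC"]

# Read the submission and report any diagnosis columns the scorer would flag as missing.
df = pd.read_csv("submission.csv")
missing = [c for c in REQUIRED if c not in df.columns]
if missing:
    print("Missing columns in CSV:", missing)
else:
    print("All diagnosis columns present.")
```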

Thanks for your answer:

As far as I know, the format of the CSV file should be as follows:
image MEL NV BCC AKIEC BKL DF VASC

is that right?

I will give it another try, but I am quite sure my file has the header you mentioned.

I also have other questions:

1. Should the image filename include the .jpg extension or not? I am following the format of the provided training ground truth.

2. Should I submit predictions as one-hot values (0 1 0 0 0 0 0) or as probabilities?

Thanks for your help.

This is a comma-separated values (CSV) file, so all columns should be delimited by commas, not spaces. To be safe, simply follow the format of the Training ground truth file. However, the order of the columns and rows does not matter, as long as all are properly labeled (which is required).
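As a rough sketch (not an official template), writing the file with pandas guarantees comma delimiters and a proper header row; the single example row, values, and file name below are purely illustrative:

```python
import pandas as pd

# One row per image: the image ID plus a probability for each diagnosis column.
rows = [
    {"image": "ISIC_0001234", "MEL": 0.02, "NV": 0.91, "BCC": 0.01,
     "AKIEC": 0.01, "BKL": 0.03, "DF": 0.01, "VASC": 0.01},
]

# to_csv writes a comma-delimited file with the header row taken from the column names.
pd.DataFrame(rows).to_csv("submission.csv", index=False)
```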

As for your other questions:

  1. Do not include the .jpg extension. The values should be like ISIC_0001234.
  2. Please submit your predictions as floating-point probabilities, in the interval [0.0, 1.0]. Probabilities will allow us to calculate additional ROC curve metrics on your data. For accuracy scoring purposes (the goal metric), the highest-probability diagnosis is used as the prediction.
    • If possible, also normalize your predictions to have 0.5 as the binary classification threshold. Per the Task 3 description:

    Diagnosis confidences are expressed as floating-point values in the closed interval [0.0, 1.0], where 0.5 is used as the binary classification threshold. Note that arbitrary score ranges and thresholds can be converted to the range of 0.0 to 1.0, with a threshold of 0.5, trivially using the following sigmoid conversion:

    1 / (1 + e^(-(a(x - b))))

    where x is the original score, b is the binary threshold, and a is a scaling parameter (i.e. the inverse measured standard deviation on a held-out dataset). Predicted responses should set the binary threshold b to a value where the classification system is expected to achieve 89% sensitivity, although this is not required.
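For example, here is a hedged sketch of that sigmoid rescaling in Python; the raw scores, threshold b, and scale a below are made-up placeholders you would estimate from your own held-out data:

```python
import numpy as np

def rescale_to_confidence(x, b, a):
    """Map raw classifier scores into [0.0, 1.0] so that x == b lands exactly on 0.5."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

# Example: raw scores with an assumed binary threshold b=2.0 and scale a = 1/std.
raw_scores = np.array([1.2, 2.0, 3.5])
print(rescale_to_confidence(raw_scores, b=2.0, a=1.0 / 0.8))
```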

Thanks a lot, I managed to submit.
The problem was a single space between the class names instead of commas.
I have one more question:

Can I submit one entry with the predictions one-hot encoded and another with the probabilities after the softmax, using the same method?

You are free to submit whatever you see fit. Intuitively, those two methods should give identical results for the primary evaluation metric (you can try this out with a validation submission).
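If you want to convince yourself before spending a submission, here is a quick sketch of the comparison (the arrays are made-up placeholders): the arg-max prediction is the same whether you submit softmax probabilities or their one-hot encoding, but the probabilities carry extra information for ROC-style metrics.

```python
import numpy as np

# Hypothetical softmax outputs for three images over the seven diagnosis columns.
probs = np.array([
    [0.02, 0.91, 0.01, 0.01, 0.03, 0.01, 0.01],
    [0.60, 0.10, 0.05, 0.05, 0.10, 0.05, 0.05],
    [0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.40],
])

# One-hot encode by putting 1.0 on the arg-max diagnosis of each row.
one_hot = np.zeros_like(probs)
one_hot[np.arange(len(probs)), probs.argmax(axis=1)] = 1.0

# The predicted class (arg-max) is identical for both encodings.
print(np.array_equal(probs.argmax(axis=1), one_hot.argmax(axis=1)))  # True
```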

Thanks for your fast and helpful answers. I have submitted!