Competition Rules

Dataset

1. Redistribution or transfer of the competition data or data links is not allowed during the competition. Participants should use the data only for this competition. The competition data will be made publicly available and free to use after the competition.

2. Participants are not allowed to use external medical image datasets during this challenge. Foundation models from other areas, such as natural images and natural language processing, are allowed. All participants should provide links to any pretrained models they use in this challenge.

3. In the setting of our tasks, the few-shot number is counted by the number of patients rather than the number of images. Participants may only use the corresponding few-shot samples from the training set; using the full training set is not allowed. (A patient-level counting sketch is given after this list.)

4. All participants should abide by the CC BY-SA 4.0 license.
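
Below is a minimal sketch of how the patient-level few-shot budget in rule 3 can be checked before training. The index file name, column names, and the budget value are hypothetical placeholders, not part of the official data format; the official few-shot split may be distributed differently.

```python
import csv
from collections import defaultdict

# Hypothetical index file and column names, for illustration only.
SUBSET_INDEX = "few_shot_subset.csv"   # columns: patient_id, image_path
FEW_SHOT_BUDGET = 10                   # allowed number of patients (example value)

# Group images by patient: many images from one patient still count
# as a single few-shot sample.
images_by_patient = defaultdict(list)
with open(SUBSET_INDEX, newline="") as f:
    for row in csv.DictReader(f):
        images_by_patient[row["patient_id"]].append(row["image_path"])

n_patients = len(images_by_patient)
n_images = sum(len(v) for v in images_by_patient.values())
print(f"{n_patients} patients, {n_images} images")

assert n_patients <= FEW_SHOT_BUDGET, "subset exceeds the patient-level few-shot budget"
```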

Submission

1. All participants must submit a complete solution to this competition during the evaluation phase. A complete solution includes a Docker container (tar file) and a qualified technical report of 2-8 pages.

2. All participants should have Docker expertise. The submitted Docker tar file should preferably be smaller than 8 GB; a Docker image larger than 12 GB will raise an error. In the evaluation phase, the Docker container must generate prediction results within 3 hours and occupy no more than 10 GB of GPU memory; otherwise, an error will be returned. (A packaging sketch is given after this list.)

3. All participants should register for this competition with their real names, affiliations (including department, full name of university/institute/company, and country), and institutional e-mail addresses. Incomplete and redundant registrations will be removed without notice.

4. Participants are not allowed to register multiple teams or accounts (only the names listed in the signed document will be considered). Participants from the same research group are also not allowed to register multiple teams. One participant can only join one team. The organizers reserve the right to disqualify participants who violate this rule.

5. All participants should develop fully automatic methods; manual interventions (e.g., manually annotating the unlabelled images) are not allowed. For a fair comparison, participants should post links to the external data or pretrained models (freely available) they used during the evaluation phase.
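
The following is a minimal sketch of exporting a submission image as a tar file and checking it against the size limits in rule 2. The image name and output path are placeholders; only the `docker save` command and the stated size limits come from the rules above.

```python
import os
import subprocess

# Hypothetical image name and output path; adjust to your own submission.
IMAGE = "team_submission:latest"
TAR_PATH = "team_submission.tar"

# Export the image as the tar file required for submission.
subprocess.run(["docker", "save", "-o", TAR_PATH, IMAGE], check=True)

# Check the exported size against the limits in rule 2:
# preferably under 8 GB; anything over 12 GB is rejected.
size_gb = os.path.getsize(TAR_PATH) / 1024**3
print(f"{TAR_PATH}: {size_gb:.2f} GB")
if size_gb > 12:
    raise SystemExit("Image exceeds the 12 GB hard limit and will be rejected.")
if size_gb > 8:
    print("Warning: image is above the preferred 8 GB size.")
```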