The goal of the NICO Challenge is to facilitate research on out-of-distribution (OOD) generalization in visual recognition by promoting learning mechanisms with intrinsic invariance and generalization ability. The training data are a mixture of several observed contexts, while the test data are composed of unseen contexts. Participants are tasked with developing algorithms that remain reliable across different contexts (domains), improving the generalization ability of their models.
The NICO Challenge is an image recognition competition with two main tracks: 1) the common context generalization (Domain Generalization, DG) track; 2) the hybrid context generalization track. The two tracks differ in whether the contexts in the training data are aligned across all categories (i.e., common contexts) and in the availability of context (domain) labels. As in the classic DG setting, in the common context generalization track all contexts are common contexts, aligned across all categories in both the training and test data. In contrast, the hybrid context generalization track uses both common and unique contexts, and the contexts vary across categories. Context labels are available in the common context generalization track but unavailable in the hybrid context generalization track.
The NICO++ dataset is reorganized into training, open validation, open test, and private test sets for each track. There are 60 categories in each track, 40 of which are shared between the tracks (80 categories in total in NICO++). For common context generalization, there are 88866 samples for training, 13907 for the public test (images are public but labels are withheld), and 35920 for the private test (both images and labels are withheld). For hybrid context generalization, there are 57425 samples for training, 8715 for the public test, and 20079 for the private test.
The training and public data are available in data-link. The NICO Challenge proceeds in two phases. In Phase 1, all competitors are required to submit their results on the public validation data. Evaluation is performed automatically via the online platform based on our proposed metric (i.e., overall accuracy on all test images). A public leaderboard is updated continuously during this phase, and the top 10 teams are invited into Phase 2, where they are required to upload their source code and their models are tested on the private test data.
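The evaluation metric described above can be sketched as follows. This is a minimal illustration of overall accuracy, not the official scoring code; the function name and label format are assumptions.

```python
def overall_accuracy(predictions, labels):
    """Overall accuracy: the fraction of test images whose predicted
    category matches the ground-truth label."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have equal length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 3 of 4 predictions are correct.
print(overall_accuracy([0, 1, 2, 2], [0, 1, 2, 3]))  # 0.75
```

Note that the metric is computed over all test images pooled together, not averaged per context, since context labels of the test data are never revealed.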
We will announce the winners and invite them to submit a report describing their techniques to the workshop at ECCV 2022. Reports should be up to 14 pages (including references) in ECCV22 format. We will also invite the winners to give presentations at the workshop.
NOTE: No external data (including ImageNet) may be used in either the pretraining or the training phase, and all models must be trained from scratch, since external information about contexts or categories can help the model learn about the test data, which should be entirely unseen during training. Uploaded models will be checked for the use of external data in the pretraining or training phase.
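A minimal sketch of complying with the from-scratch rule, assuming a PyTorch/torchvision pipeline; the backbone choice is an illustrative assumption, not a requirement of the challenge.

```python
import torchvision.models as models

# Train from scratch: pass weights=None (torchvision >= 0.13) so that no
# ImageNet-pretrained parameters are loaded, in line with the challenge rule
# forbidding external data in both pretraining and training.
model = models.resnet50(weights=None, num_classes=60)  # 60 categories per track
```

In older torchvision versions the equivalent argument is `pretrained=False`; either way, all parameters are randomly initialized.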
|2022-04-18||Release of the NICO++ dataset. (See the DATASET)|
|2022-04-20||Start Date of Phase 1.|
|2022-07-10||Deadline of Phase 1. This is the last day for team registration and result submission.|
|||Notification of winner teams in Phase 1.|
|||Start Date of Phase 2.|
|2022-07-15||Code submission deadline. This is the last day for the Top 10 teams to submit their code and models.|
|2022-07-30||Deadline of Phase 2. This is the last day for retraining and testing the submitted models.|
|2022-08-10||Notification of Final Winners.|
All deadlines are at 23:59 AoE on the corresponding day unless otherwise noted.