This is the official starting repository for the Continual Learning Challenge held in the 3rd CLVision Workshop @ CVPR 2022.
Please refer to the challenge website for more details!
Updates:
- The competition report paper is available on arXiv!
- Reports from the top-3 teams per track are now available under the `reports` folder.
The devkit is based on the Avalanche library. We warmly recommend looking at the documentation (especially the "Zero To Hero tutorials") if this is your first time using it! Avalanche is added as a Git submodule of this repository.
The recommended setup steps are as follows:

- Clone the repository (with submodules) and create the conda environment:

  ```bash
  git clone --recurse-submodules https://github.com/ContinualAI/clvision-challenge-2022.git
  cd clvision-challenge-2022
  conda env create -f environment.yml
  ```

- Setup your IDE so that the avalanche submodule is included in the PYTHONPATH. Note: you have to include the top-level `avalanche` folder, not `avalanche/avalanche`!
  - For JetBrains IDEs (PyCharm), this can be done from the Project pane (usually on the left) by right-clicking on the "avalanche" folder -> "Mark Directory as" -> "Sources Root".
  - For VS Code, follow the official documentation.
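If you run scripts from a plain shell rather than an IDE, an equivalent workaround is to prepend the submodule folder to `sys.path` before importing. This is a minimal sketch, assuming the snippet lives in a script at the repository root:

```python
import sys
from pathlib import Path

# Assumption: this script sits at the repository root. Prepend the top-level
# "avalanche" submodule folder (not avalanche/avalanche) so that
# `import avalanche` resolves to the submodule checkout.
sys.path.insert(0, str(Path(__file__).resolve().parent / "avalanche"))

import avalanche
print(avalanche.__file__)  # should point inside the submodule
```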
- Download and extract the dataset: in order to download the dataset, we ask all participants to accept the dataset terms and provide their email addresses through this form. You will immediately receive the download instructions at the provided address. We recommend extracting the dataset in the default folder `$HOME/3rd_clvision_challenge/demo_dataset/`. The final directory structure should look like this:

  ```
  $HOME/3rd_clvision_challenge/challenge/
  ├── ego_objects_challenge_test.json
  ├── ego_objects_challenge_train.json
  ├── images
  │   ├── 07A28C4666133270E9D65BAB3BCBB094_0.png
  │   ├── 07A28C4666133270E9D65BAB3BCBB094_100.png
  │   ├── 07A28C4666133270E9D65BAB3BCBB094_101.png
  │   ├── ...
  ```
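As a quick sanity check after extraction, you can load the annotation file with the standard library. This is a hedged sketch: it assumes the default path above and makes no assumption about the annotation schema:

```python
import json
from pathlib import Path

# Assumes the default extraction folder recommended above.
root = Path.home() / "3rd_clvision_challenge" / "challenge"

with open(root / "ego_objects_challenge_train.json") as f:
    train_meta = json.load(f)

# Inspect the top-level structure without assuming a specific schema.
print(type(train_meta))
if isinstance(train_meta, dict):
    print(list(train_meta.keys()))
```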
The steps above should be OS-agnostic. However, we recommend setting up your development environment on a mainstream Linux distribution.
The starting template for the instance classification track is based on tried and tested strategies from Avalanche. It can be found in `starting_template_instance_classification.py` and implements a working train/eval loop that uses the Naive strategy.

The Naive strategy is a plain fine-tuning loop: given the optimizer, the number of epochs, the minibatch size, and the loss function, it just runs a very forgetting-prone training loop. Take it as the lower bound for a solution. The basic loop is already there, ready to be customized; there are two main ways to implement your solution:
- Override parts of the base class `SupervisedTemplate` to customize the epoch loop, the backward and forward operations, and so on.
- Implement your solution as a plugin; many mainstream techniques are implemented in Avalanche as plugins (see the documentation), and a minimal sketch of this route is shown below.
We suggest studying the From Zero To Hero tutorials to learn about Avalanche.
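To make the plugin route concrete, here is a minimal, hedged sketch. The model, the hyperparameters, and the `ShrinkLRPlugin` name are illustrative placeholders (not the challenge defaults), and the import paths changed across Avalanche releases, so check the version pinned by the submodule:

```python
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD

from avalanche.training.plugins import SupervisedPlugin  # `StrategyPlugin` in older releases
from avalanche.training.supervised import Naive  # `avalanche.training.strategies` in older releases


class ShrinkLRPlugin(SupervisedPlugin):
    """Hypothetical plugin: decay the learning rate after every experience."""

    def __init__(self, factor=0.5):
        super().__init__()
        self.factor = factor

    def after_training_exp(self, strategy, **kwargs):
        # Callbacks receive the strategy object, so a plugin can read and
        # modify the model, optimizer, dataloaders, current loss, etc.
        for group in strategy.optimizer.param_groups:
            group["lr"] *= self.factor


model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(128, 10))  # placeholder model
strategy = Naive(
    model,
    SGD(model.parameters(), lr=0.01),
    CrossEntropyLoss(),
    train_mb_size=64,
    train_epochs=1,
    plugins=[ShrinkLRPlugin(factor=0.5)],  # the plugin hooks into the loop events
)
```

The subclassing route is analogous: define a child of `SupervisedTemplate` and override methods such as `training_epoch` or `forward` instead of attaching a plugin.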
The starting points for the detection tracks are `starting_template_category_detection.py` and `starting_template_instance_detection.py`. Both entry points use the `ObjectDetectionTemplate` template, which is the one you should customize to implement your CL strategy. The recommended way to do this is to create a child class, as sketched below.
The `ObjectDetectionTemplate` is based on Avalanche training templates. Its training loop is an almost exact implementation of the one shown in the official TorchVision Object Detection Finetuning Tutorial, especially the `train_one_epoch` function.
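The sketch below shows the general shape of such a child class. The import path and hook names are assumptions on my part (they follow Avalanche's template conventions); check the devkit source for the exact class location and the hooks available in the pinned version:

```python
# Assumption: adjust this import to wherever your devkit checkout defines
# the ObjectDetectionTemplate class.
from avalanche.training.templates import ObjectDetectionTemplate  # assumed path


class MyDetectionStrategy(ObjectDetectionTemplate):
    """Hypothetical child class customizing the detection training loop."""

    def _before_training_exp(self, **kwargs):
        super()._before_training_exp(**kwargs)
        # e.g., snapshot the current model here to distill from it later

    def criterion(self):
        # The detection loop computes a dict of per-component losses; override
        # this to reweight or extend the total loss used for the backward pass.
        return super().criterion()
```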
A schematic visualization of the training loop, its events, and an example of a plugin implementing EWC is shown below:
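In code, an event-driven plugin of that kind hooks into the loop events around the backward pass. The sketch below is a simplified stand-in, not the devkit's EWC: it anchors to the previous weights with a plain L2 penalty (uniform importances instead of Fisher-based ones) and assumes the template triggers the standard Avalanche callbacks:

```python
from avalanche.training.plugins import SupervisedPlugin


class L2AnchorPlugin(SupervisedPlugin):
    """EWC-like sketch with uniform importances (hypothetical name)."""

    def __init__(self, lam=1.0):
        super().__init__()
        self.lam = lam
        self.anchor = None  # parameters frozen after the previous experience

    def before_backward(self, strategy, **kwargs):
        # Add a quadratic penalty to the loss before the backward pass.
        if self.anchor is None:
            return
        penalty = sum(
            ((p - a) ** 2).sum()
            for p, a in zip(strategy.model.parameters(), self.anchor)
        )
        strategy.loss = strategy.loss + self.lam * penalty

    def after_training_exp(self, strategy, **kwargs):
        # Refresh the anchor once training on the current experience ends.
        self.anchor = [p.detach().clone() for p in strategy.model.parameters()]
```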
Solutions must be submitted through the CodaLab portal:
- Instance Classification track: submissions portal
- Category Detection track: submissions portal
- Instance Detection track: submissions portal
A solution consists of a zip file containing N (track-dependent) files. Each file must contain the predictions obtained on the full test set by running an eval pass after each training experience. The devkit already contains a plugin that takes care of storing this output. Make sure you do NOT include intermediate directories inside the archive, and do not change the names of the files produced by the plugin.
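For instance, here is a hedged way to build the archive with Python's standard library; the folder name is a placeholder for wherever the plugin stored its output:

```python
import zipfile
from pathlib import Path

# Placeholder path: point this at the folder where the devkit plugin saved
# the per-experience prediction files.
output_dir = Path("./track_output")

with zipfile.ZipFile("submission.zip", "w") as zf:
    for f in sorted(output_dir.iterdir()):
        # arcname=f.name puts each file at the archive root (no intermediate
        # directories) and keeps the plugin's original file names.
        zf.write(f, arcname=f.name)
```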
The maximum number of allowed submissions is 20. Only 3 solutions can be submitted each day.
Note: the evaluation for the detection tracks may take a few minutes to complete.
- The devkit will be updated quite often. We recommend checking for new updates frequently.
- The `InteractiveLogger` just prints the progress to stdout (and it is quite verbose). Consider using dashboard loggers such as TensorBoard or Weights & Biases; see the tutorial on loggers here. You can use more than one logger at the same time!
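For example, here is a minimal sketch combining two loggers in a single evaluation plugin; the metric choice and log directory are illustrative:

```python
from avalanche.evaluation.metrics import accuracy_metrics
from avalanche.logging import InteractiveLogger, TensorboardLogger
from avalanche.training.plugins import EvaluationPlugin

# Both loggers receive the same metric emissions: stdout for quick feedback,
# TensorBoard for dashboards.
eval_plugin = EvaluationPlugin(
    accuracy_metrics(experience=True, stream=True),
    loggers=[InteractiveLogger(), TensorboardLogger(tb_log_dir="./tb_data")],
)
# Pass `evaluator=eval_plugin` when constructing your strategy.
```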