
l2p-pytorch's Introduction

L2P PyTorch Implementation

This repository contains PyTorch implementation code for the continual learning method L2P:
Wang, Zifeng, et al. "Learning to Prompt for Continual Learning." CVPR 2022.

The official JAX implementation is available here.

Environment

The system used for development and testing:

  • Ubuntu 20.04.4 LTS
  • Slurm 21.08.1
  • NVIDIA GeForce RTX 3090
  • Python 3.8

Usage

First, clone the repository locally:

git clone https://github.com/JH-LEE-KR/l2p-pytorch
cd l2p-pytorch

Then, install the packages below:

pytorch==1.12.1
torchvision==0.13.1
timm==0.6.7
pillow==9.2.0
matplotlib==3.5.3

These packages can be installed easily with:

pip install -r requirements.txt

Data preparation

If you already have CIFAR-100 or the 5-Datasets benchmark (MNIST, Fashion-MNIST, NotMNIST, CIFAR10, SVHN), pass your dataset path via --data-path.

If the datasets are not ready yet, change the download argument in datasets.py as follows:

CIFAR-100

datasets.CIFAR100(download=True)

5-Datasets

datasets.CIFAR10(download=True)
MNIST_RGB(download=True)
FashionMNIST(download=True)
NotMNIST(download=True)
SVHN(download=True)
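
For reference, a minimal sketch of enabling downloads, assuming the wrappers in datasets.py follow the standard torchvision constructor signature (root, train/split, download, transform); the actual arguments in the repository may differ:

from torchvision import datasets, transforms

transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])

# CIFAR-100 for the Split-CIFAR100 benchmark
cifar100 = datasets.CIFAR100(root='/local_datasets/', train=True,
                             download=True, transform=transform)

# Components of 5-Datasets that ship with torchvision; MNIST_RGB, FashionMNIST,
# and NotMNIST are wrappers defined in datasets.py and take the same flag.
cifar10 = datasets.CIFAR10(root='/local_datasets/', train=True,
                           download=True, transform=transform)
svhn = datasets.SVHN(root='/local_datasets/', split='train',
                     download=True, transform=transform)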

Training

To train a model via command line:

Single node with a single GPU

python -m torch.distributed.launch \
        --nproc_per_node=1 \
        --use_env main.py \
        <cifar100_l2p or five_datasets_l2p> \
        --model vit_base_patch16_224 \
        --batch-size 16 \
        --data-path /local_datasets/ \
        --output_dir ./output 

Single node with multiple GPUs

python -m torch.distributed.launch \
        --nproc_per_node=<Num GPUs> \
        --use_env main.py \
        <cifar100_l2p or five_datasets_l2p> \
        --model vit_base_patch16_224 \
        --batch-size 16 \
        --data-path /local_datasets/ \
        --output_dir ./output 

Training is also available on a Slurm system by adjusting the options in train_cifar100_l2p.sh or train_five_datasets.sh accordingly.

Multinode train

Distributed training is available via Slurm and submitit:

pip install submitit

To train a model on 2 nodes with 4 GPUs each:

python run_with_submitit.py <cifar100_l2p or five_datasets_l2p> --shared_folder <Absolute Path of shared folder for all nodes>

The absolute path of the shared folder must be accessible from all nodes.
Depending on your environment, you can optionally set NCCL_SOCKET_IFNAME=<the network interface to use for communication>.

Evaluation

To evaluate a trained model:

python -m torch.distributed.launch --nproc_per_node=1 --use_env main.py <cifar100_l2p or five_datasets_l2p> --eval

Result

Test results on a single GPU.

Split-CIFAR100

Name                                  Acc@1    Forgetting
PyTorch Implementation                83.77    6.63
Reproduced Official Implementation    82.59    7.88
Paper Results                         83.83    7.63

5-Datasets

Name                                  Acc@1    Forgetting
PyTorch Implementation                80.22    3.81
Reproduced Official Implementation    79.68    3.71
Paper Results                         81.14    4.64

Here are the metrics used in the test, and their corresponding meanings:

Metric        Description
Acc@1         Average evaluation accuracy up until the last task
Forgetting    Average forgetting up until the last task
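
As a rough illustration (a sketch, not the repository's evaluation code), both metrics can be computed from a matrix acc[i][j] holding the accuracy on task j after training on task i:

import numpy as np

def average_accuracy(acc: np.ndarray) -> float:
    # Acc@1: mean accuracy over all tasks, measured after training on the last task.
    T = acc.shape[0]
    return float(acc[T - 1, :T].mean())

def average_forgetting(acc: np.ndarray) -> float:
    # Forgetting: mean drop from each earlier task's best accuracy to its final accuracy.
    T = acc.shape[0]
    drops = [acc[:T - 1, j].max() - acc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))

# Hypothetical 3-task example:
acc = np.array([[90.0,  0.0,  0.0],
                [85.0, 88.0,  0.0],
                [83.0, 84.0, 87.0]])
print(average_accuracy(acc))    # (83 + 84 + 87) / 3 = 84.67
print(average_forgetting(acc))  # ((90 - 83) + (88 - 84)) / 2 = 5.5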

License

This repository is released under the Apache 2.0 license as found in the LICENSE file.

Cite

@inproceedings{wang2022learning,
  title={Learning to prompt for continual learning},
  author={Wang, Zifeng and Zhang, Zizhao and Lee, Chen-Yu and Zhang, Han and Sun, Ruoxi and Ren, Xiaoqi and Su, Guolong and Perot, Vincent and Dy, Jennifer and Pfister, Tomas},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={139--149},
  year={2022}
}

l2p-pytorch's People

Contributors

jh-lee-kr


l2p-pytorch's Issues

vit_base_patch16_224

Hello!
Thank you so much for implementing the PyTorch version of L2P!
When obtaining the pre-trained model, I encountered the problem shown in the attached screenshot. Has anyone else run into a similar issue?
Thank you for your work! Looking forward to your reply, best wishes!

Diversifying prompt-selection

Thank you very much for your PyTorch implementation of L2P!
I have a question about prompt selection.
In the paper, a prompt-frequency-based weight is used to encourage diverse prompt selection, but I can't find that part in the code.
I can't find it in the official JAX-based code either, so could you let me know if there's anything I'm missing?

Thank you very much for your work!!!

Question about the classification head layer

Hi, I am very interested in your work, and your reproduction contributes a lot to the incremental learning community.

While studying the code, I found that the classification layer is only updated during the first task and is no longer updated in subsequent tasks. Is this normal? Is the official code also set up like this? Looking forward to your answer.

The prompt parameters for five_datasets in the PyTorch implementation differ from those given in the paper

Hi!
Thank you so much for implementing the pytorch version of l2p!
I have recently been trying to reproduce the five_datasets result of this PyTorch implementation, and I noticed that the prompt parameters for five_datasets differ from those given in the paper: the prompt length given in the paper is 5, but it is 10 in the code. Will this affect the final experimental results?

Thank you for your work! Looking forward to your reply, best wishes!

Loss is NaN.

Hi @Lee-JH-KR,

Thank you for releasing the PyTorch version of L2P.

I am getting the following error. Do you have any suggestions for it?

Loss is nan, stopping training.

I really appreciate any help you can provide.

Question about loss function

engine.py line 68 has:
loss = loss - args.pull_constraint_coeff * output['reduce_sim']
According to the paper, shouldn't it be:
loss = loss + args.pull_constraint_coeff * output['reduce_sim']?
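
As a hedged note on the sign (assuming output['reduce_sim'] is the cosine similarity between the query feature and the selected prompt keys): the paper adds a surrogate term that minimizes the query-key distance, and with cosine distance = 1 - cosine similarity, subtracting the similarity differs from adding the distance only by a constant, so the gradients match:

# Hypothetical numbers; only the relationship between the two forms matters.
ce_loss = 2.0          # cross-entropy loss
lam = 0.1              # args.pull_constraint_coeff
reduce_sim = 0.8       # query-key cosine similarity

paper_loss = ce_loss + lam * (1.0 - reduce_sim)     # paper: add the distance term
code_loss = ce_loss - lam * reduce_sim              # code: subtract the similarity
assert abs(paper_loss - (code_loss + lam)) < 1e-9   # identical up to the constant lam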

Question regarding prompt selection

Hi!

Thank you for implementing the PyTorch version of L2P!
While running the code on the CIFAR100 dataset, I find that across all tasks only the prompts with indices 0, 4, 5, 8, and 9 are ever selected.
However, if the same subset of prompts is selected for all tasks, those prompts get updated on every task, so wouldn't this still cause catastrophic forgetting? Do you have an idea of why this happens and why L2P seems to suffer from much less forgetting?

Thank you!

Doubts regarding transferring previously learned prompt params to the new prompt

Hi @JH-LEE-KR, thanks for this amazing PyTorch implementation of L2P. I have the following doubts about the code:

  1. In engine.py > train_and_evaluate(): "Transfer previous learned prompt params to the new prompt." I am confused about this: the top_k prompts used for any task will overlap, as there aren't enough dedicated (mutually exclusive) prompts for each task. So why are we shifting the prompt weights from prev_idx to cur_idx (see the sketch after this list)?
    model.prompt.prompt[cur_idx] = model.prompt.prompt[prev_idx]
    Based on my understanding, if the prompt pool size is 10, then the 10 prompts are shared across all tasks, and on every training batch the top k (5 prompts) are updated based on the query function. Kindly help me understand this.

  2. Regarding the usage of train_mask and class_mask:
    Doesn't L2P initialize its own classifier for every new task (covering the union of all classes seen up to that task)? Then why do we need to mask out certain classes just before the loss computation?
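
A minimal sketch of the transfer asked about in point 1, under the assumption that with a shared prompt pool each task t is assigned the index slice [t * top_k, (t + 1) * top_k) and the previous slice is copied into the current one at a task boundary (the repository's exact indexing may differ):

import torch

top_k = 5
# Hypothetical stand-in for model.prompt.prompt: a pool of 10 prompts,
# each of length 5 with embedding dimension 768.
prompt_pool = torch.nn.Parameter(torch.randn(10, 5, 768))

prev_task, cur_task = 0, 1
prev_idx = slice(prev_task * top_k, (prev_task + 1) * top_k)
cur_idx = slice(cur_task * top_k, (cur_task + 1) * top_k)

# At the start of the new task, initialize its prompt slice from the
# previously learned slice instead of from random values.
with torch.no_grad():
    prompt_pool[cur_idx] = prompt_pool[prev_idx]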

L2P reproduce (about freeze layer & shuffle argument)

Hello. Thank you for the PyTorch implementation of L2P. I have a question regarding the reimplementation of L2P on CIFAR100. In the paper, the frozen layers include blocks, patch_embed, and cls_token. However, the --freeze argument in the code you provided includes blocks, patch_embed, cls_token, pos_embed, and norm. I implemented it following the paper's setting, but the performance does not match the GitHub code. (The shuffle argument is also set to False.) Do you have any ideas or suggestions?
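
For context, a minimal sketch of name-prefix freezing (an assumption about how a --freeze list could be applied, not necessarily the repository's exact code); only parameters outside the listed prefixes, i.e. the prompt pool and the head, would remain trainable:

import timm

model = timm.create_model('vit_base_patch16_224', pretrained=True)

# Prefixes from the --freeze argument discussed above.
freeze = ['blocks', 'patch_embed', 'cls_token', 'pos_embed', 'norm']

for name, param in model.named_parameters():
    if any(name.startswith(prefix) for prefix in freeze):
        param.requires_grad = False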

How to use the rehearsal buffer?

Hi, amazing work!

I noticed in the paper that L2P can use a rehearsal buffer to further improve performance, but the repository doesn't seem to include an implementation of this part. I have a few questions about it:
(1) Is random sampling or herding sampling used?
(2) Besides putting old samples into the dataloader, are any other operations used, such as distillation or balanced fine-tuning?
(3) Will the official implementation of this part be added to the code later?
(4) Can the rehearsal buffer also further improve the performance of DualPrompt?

Looking forward to your reply, best wishes!
