
efficienttrain's Introduction

EfficientTrain++ (TPAMI 2024 & ICCV 2023)

This repo releases the code and pre-trained models of EfficientTrain++, an off-the-shelf, easy-to-implement algorithm for the efficient training of foundation visual backbones.

[TPAMI 2024] EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training
Yulin Wang, Yang Yue, Rui Lu, Yizeng Han, Shiji Song, and Gao Huang
Tsinghua University, BAAI
[arXiv]

[ICCV 2023] EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones
Yulin Wang, Yang Yue, Rui Lu, Tianjiao Liu, Zhao Zhong, Shiji Song, and Gao Huang
Tsinghua University, Huawei, BAAI
[arXiv]

  • Update on 2024.05.14: I'm highly interested in extending EfficientTrain++ to CLIP-style models, multi-modal large language models, generative models (e.g., diffusion-based or token-based), and advanced visual self-supervised learning methods. I'm always open to discussions and potential collaborations. If you are interested, please kindly send an e-mail to me ([email protected]).

Overview

We present a novel curriculum learning approach for the efficient training of foundation visual backbones. Our algorithm, EfficientTrain++, is simple, general, yet surprisingly effective. As an off-the-shelf approach, it reduces the training time of various popular models (e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer) by 1.5−3.0× on ImageNet-1K/22K without sacrificing accuracy. It also demonstrates efficacy in self-supervised learning (e.g., MAE).
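For intuition, the curriculum in EfficientTrain++ exposes the model to the "easier" low-frequency content of images first, e.g., via the low-frequency cropping operation described in the papers. Below is a minimal, illustrative sketch using PyTorch's FFT utilities; the function name and the intensity-rescaling convention are our own simplifications, not the repo's actual implementation:

```python
import torch

def low_frequency_crop(x: torch.Tensor, out_size: int) -> torch.Tensor:
    """Keep only the central (low-frequency) out_size x out_size block of the
    shifted 2-D spectrum and invert it, producing a smaller image that retains
    the low-frequency content of x (shape: ..., H, W)."""
    H, W = x.shape[-2:]
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))  # DC term at center
    top, left = (H - out_size) // 2, (W - out_size) // 2
    crop = spec[..., top:top + out_size, left:left + out_size]  # central block
    crop = torch.fft.ifftshift(crop, dim=(-2, -1))
    # rescale so pixel intensities stay comparable on the smaller grid
    return torch.fft.ifft2(crop).real * (out_size * out_size) / (H * W)
```

In a curriculum of this kind, out_size would start small early in training and grow toward the full input resolution as training proceeds.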

Highlights of our work

  • 1.5−3.0× lossless training or pre-training speedup on ImageNet-1K and ImageNet-22K; the practical (wall-time) savings match the theoretical ones, and neither upstream nor downstream performance is affected.
  • Effective for diverse visual backbones, including ConvNets, isotropic/multi-stage ViTs, and ConvNet-ViT hybrids, e.g., ResNet, ConvNeXt, DeiT, PVT, Swin, CSWin, and CAFormer.
  • Dramatically improves the performance of relatively small models (e.g., on ImageNet-1K, DeiT-S: 80.3% -> 81.3%, DeiT-T: 72.5% -> 74.4%).
  • Superior performance across varying training budgets (e.g., training costs of 0-300 epochs or more).
  • Applicable to both supervised and self-supervised learning (e.g., MAE).
  • Optional techniques for machines with limited CPU/memory capabilities (e.g., those that cannot sustain a high data pre-processing throughput).
  • Optional techniques for large-scale parallel training (e.g., 16-64 GPUs or more).

Catalog

  • ImageNet-1K Training Code
  • ImageNet-1K Pre-trained Models
  • ImageNet-22K -> ImageNet-1K Fine-tuning Code
  • ImageNet-22K Pre-trained Models
  • ImageNet-22K -> ImageNet-1K Fine-tuned Models

Installation

We support PyTorch>=2.0.0 and torchvision>=0.15.1. Please install them following the official instructions.

Clone this repo and install the required packages:

git clone https://github.com/LeapLabTHU/EfficientTrain
cd EfficientTrain
pip install timm==0.4.12 tensorboardX six

The instructions for preparing ImageNet-1K/22K datasets can be found here.

Training

See TRAINING.md for the training instructions.

Pre-trained models & evaluation & fine-tuning

See EVAL.md for the pre-trained models and the instructions for evaluating or fine-tuning them.

Results

Supervised learning on ImageNet-1K

ImageNet-22K pre-training

Supervised learning on ImageNet-1K (varying training budgets)

Object detection and instance segmentation on COCO

Semantic segmentation on ADE20K

Self-supervised learning results on top of MAE

TODO

This repo is still being updated. If you need anything, whether or not it is listed below, please send an e-mail to me ([email protected]).

  • A detailed tutorial on how to use this repo to train (customized) models on custom datasets.
  • ImageNet-22K Training Code
  • ImageNet-1K Self-supervised Learning Code (EfficientTrain + MAE)
  • EfficientTrain + MAE Pre-trained Models

Acknowledgments

This repo is mainly developed on top of ConvNeXt; we sincerely thank them for their efficient and neat codebase. This repo is also built using DeiT and timm.

Citation

If you find this work valuable or use our code in your own research, please consider citing us:

@article{wang2024EfficientTrain_pp,
        title = {EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training},
       author = {Wang, Yulin and Yue, Yang and Lu, Rui and Han, Yizeng and Song, Shiji and Huang, Gao},
      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
         year = {2024},
          doi = {10.1109/TPAMI.2024.3401036}
}
@inproceedings{wang2023EfficientTrain,
        title = {EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones},
       author = {Wang, Yulin and Yue, Yang and Lu, Rui and Liu, Tianjiao and Zhong, Zhao and Song, Shiji and Huang, Gao},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
         year = {2023}
}


efficienttrain's Issues

about code

Dear author, in which part of the code are the greedy algorithm and the frequency trimming implemented?

Cannot reproduce the results on DeiT-Tiny for ImageNet-1k

Thanks for providing the code of EfficientTrain! We have some questions about the experimental results on DeiT-Tiny for ImageNet-1K.

We tried to reproduce the results of Table 8 (a) in the original paper, using the script from the README to train DeiT-Tiny:

result_dir=/result/et_train/deit/imagenet_tiny_run1
dataset_dir=/home/data/imagenet/image-net.org/data/ILSVRC/2012
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python ET_training.py \
--data_path $dataset_dir \
--output_dir $result_dir \
--model deit_tiny_patch16_224 \
--final_bs 256 --epochs 300 \
--num_gpus 8 --num_workers 8

Part of the log is as follows:

{"train_lr": 0.0032559395282436713, "train_min_lr": 0.0032559395282436713, "train_loss": 4.354069483824647, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.4368886798620224, "test_loss": 1.8930224837625729, "test_acc1": 57.76200146789551, "test_acc5": 81.58200245544434, "epoch": 99, "n_parameters": 5698984}
{"train_lr": 0.0032384000923775754, "train_min_lr": 0.0032384000923775754, "train_loss": 4.347171298299845, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.4361276876850006, "test_loss": 1.8822389315156376, "test_acc1": 57.42200148498535, "test_acc5": 81.62800247436523, "epoch": 100, "n_parameters": 5698984}
...
{"train_lr": 0.0011431063586592564, "train_min_lr": 0.0011431063586592564, "train_loss": 4.027055377188401, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.5856632233048097, "test_loss": 1.4776335459421663, "test_acc1": 66.34400211059571, "test_acc5": 87.6380025302124, "epoch": 199, "n_parameters": 5707432}
{"train_lr": 0.001122893755989195, "train_min_lr": 0.001122893755989195, "train_loss": 4.016480489228016, "train_weight_decay": 0.05000000000000049, "train_grad_norm": Infinity, "test_loss": 1.449809268116951, "test_acc1": 66.3800021170044, "test_acc5": 87.57600238250733, "epoch": 200, "n_parameters": 5707432}
{"train_lr": 0.0011027916320893176, "train_min_lr": 0.0011027916320893176, "train_loss": 4.003306985140229, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.5942679251997899, "test_loss": 1.461431296870989, "test_acc1": 66.61200170288086, "test_acc5": 87.78600267547607, "epoch": 201, "n_parameters": 5707432}
...
{"train_lr": 6.328153925039405e-06, "train_min_lr": 6.328153925039405e-06, "train_loss": 3.7115134861893377, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.8702810922494302, "test_loss": 1.167566333185224, "test_acc1": 72.78800256469727, "test_acc5": 91.56800273162841, "epoch": 293, "n_parameters": 5717416}
{"train_lr": 4.8186317113912905e-06, "train_min_lr": 4.8186317113912905e-06, "train_loss": 3.6995124538930564, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.870861506423889, "test_loss": 1.1652020443888271, "test_acc1": 72.78800224243165, "test_acc5": 91.58000273162841, "epoch": 294, "n_parameters": 5717416}
...
{"train_lr": 1.2942618680829815e-06, "train_min_lr": 1.2942618680829815e-06, "train_loss": 3.7039048994580903, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.8682293318785154, "test_loss": 1.1661551656091915, "test_acc1": 72.76000192993165, "test_acc5": 91.56800231323243, "epoch": 298, "n_parameters": 5717416}
{"train_lr": 1.042153755247833e-06, "train_min_lr": 1.042153755247833e-06, "train_loss": 3.705186177713749, "train_weight_decay": 0.05000000000000049, "train_grad_norm": 0.8685083528741812, "test_loss": 1.1662813990431673, "test_acc1": 72.74400226898193, "test_acc5": 91.56600253082276, "epoch": 299, "n_parameters": 5717416}

The best accuracy we obtained for DeiT-Tiny is 72.8%, so we could not reproduce the reported result of 73.3%.

Furthermore, the best accuracies at epoch 100 and epoch 200 are 57.4% and 66.4%, respectively, falling short of the reported 68.1% and 71.8%.

About low-pass filtering

I can implement low-pass filtering of an image through the Fourier transform. Could you please explain how to perform the inverse Fourier transform to obtain the low-pass-filtered image, as shown in the figure in your paper?
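For reference, one common way to obtain a low-pass-filtered image of the same size is to mask out the high-frequency coefficients in the shifted spectrum and then apply the inverse FFT. The sketch below is a generic illustration of this idea in PyTorch; the function name and the circular-mask radius convention are ours, not necessarily what the paper's figure uses:

```python
import torch

def low_pass_filter(x: torch.Tensor, radius: int) -> torch.Tensor:
    """Zero every frequency farther than `radius` from the DC term in the
    shifted spectrum, then invert; the output has the same size as x."""
    H, W = x.shape[-2:]
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))  # DC at center
    yy = torch.arange(H).view(-1, 1) - H // 2
    xx = torch.arange(W).view(1, -1) - W // 2
    mask = (yy * yy + xx * xx) <= radius * radius               # circular mask
    spec = spec * mask
    spec = torch.fft.ifftshift(spec, dim=(-2, -1))
    return torch.fft.ifft2(spec).real  # imaginary residue is ~0 for real inputs
```

Taking the real part after the inverse transform is what maps the filtered spectrum back to an ordinary image; for a real-valued input and a symmetric mask, the imaginary component is numerically negligible.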
