

Faster Training

Single-machine multi-GPU

ENV

{
    "Python":"3.8.10",
    "torch":"1.8.1",
    "torchvision":"0.9.1",
    "dali": "1.2.0",
    "CUDA":"11.1",
    "cuDNN":8005,
    "GPU":{
        "#0":{
            "name":"Quadro RTX 6000",
            "memory":"23.65GB"
        },
        "#1":{
            "name":"Quadro RTX 6000",
            "memory":"23.65GB"
        }
    },
    "Platform":{
        "system":"Linux",
        "node":"4029GP-TRT",
        "version":"#83~18.04.1-Ubuntu SMP Tue May 11 16:01:00 UTC 2021",
        "machine":"x86_64",
        "processor":"x86_64"
    }
}

Model Running

Batch size: 512, conv layers: 11, epochs: 5

Baseline: 276.980 s

Training time in seconds:

        base       +cudnn_benchmark   +AMP      +cudnn_benchmark +AMP
DP      163.740    104.807            74.948    73.862
DDP     142.497    102.535            67.095    72.998
  • DP: torch.nn.DataParallel
  • AMP: torch.cuda.amp
  • DDP: torch.nn.parallel.DistributedDataParallel
  • cudnn_benchmark: torch.backends.cudnn.benchmark = True
  • pin_memory=True
  • non_blocking=True
  • optimizer.zero_grad(set_to_none=True)
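For reference, a minimal sketch of a training loop combining all of the options above (DDP + AMP + cudnn_benchmark, with pin_memory, non_blocking and set_to_none). It is an assumed illustration, not the repo's running.sh script; the model, dataset, batch size and learning rate are placeholders.

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def train(local_rank, model, dataset, epochs=5):
    # Assumes a distributed launcher set the env vars that init_process_group reads.
    dist.init_process_group("nccl")
    torch.cuda.set_device(local_rank)
    torch.backends.cudnn.benchmark = True            # autotune conv kernels for fixed input shapes

    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=512, sampler=sampler,
                        num_workers=8, pin_memory=True)  # pinned pages enable async H2D copies

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler()             # dynamic loss scaling for mixed precision
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(epochs):
        sampler.set_epoch(epoch)                     # reshuffle shards across ranks each epoch
        for images, targets in loader:
            images = images.cuda(local_rank, non_blocking=True)   # overlap copy with compute
            targets = targets.cuda(local_rank, non_blocking=True)
            optimizer.zero_grad(set_to_none=True)    # skip zero-filling gradient buffers
            with torch.cuda.amp.autocast():          # run the forward pass in mixed precision
                loss = criterion(model(images), targets)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()

With torch 1.8 this would be launched with, e.g., python -m torch.distributed.launch --nproc_per_node=2 train.py.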

Usage

# $1 is the number of epochs
./running.sh 5

# Or run the commands in the script directly.

Data Loading

Prepare

Drop caches before the I/O benchmark so previously cached pages do not skew the results (writing to /proc/sys/vm/drop_caches requires root).

sync

# To free pagecache:
echo 1 > /proc/sys/vm/drop_caches
# To free reclaimable slab objects (includes dentries and inodes):
echo 2 > /proc/sys/vm/drop_caches
# To free slab objects and pagecache:
echo 3 > /proc/sys/vm/drop_caches

Data Loading Time

Batch size: 256/2, workers: 8 × 2

        (baseline)   Bottleneck   +DALI/CPU   Bottleneck   +DALI/GPU   Bottleneck
HDD     ~25 MB/s     IO           ~40 MB/s    IO           ~40 MB/s    IO
SSD     ~230 MB/s    CPU          ~500 MB/s   CPU          ~600 MB/s   IO
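For reference, a minimal sketch of a DALI pipeline with GPU ("mixed") JPEG decoding, corresponding to the +DALI/GPU column. This is an assumed illustration, not the repo's loading_faster.py; the batch size, thread count and dataset path are placeholders.

from nvidia.dali import pipeline_def, fn, types
from nvidia.dali.plugin.pytorch import DALIGenericIterator

@pipeline_def(batch_size=128, num_threads=8, device_id=0)
def train_pipe(data_dir):
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True, name="Reader")
    # device="mixed" decodes JPEGs on the GPU; device="cpu" gives the +DALI/CPU variant
    images = fn.decoders.image(jpegs, device="mixed", output_type=types.RGB)
    images = fn.resize(images, resize_shorter=256)
    images = fn.crop_mirror_normalize(
        images,
        crop=(224, 224),
        dtype=types.FLOAT,
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

pipe = train_pipe("/datasets/ILSVRC2012/train")
pipe.build()
loader = DALIGenericIterator(pipe, ["images", "labels"], reader_name="Reader")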

Usage

# $1 is the script, $2 is the imagenet dataset path.
./loading.sh loading_faster.py '/datasets/ILSVRC2012/'

# Or run the commands in the script directly.

Downscale ImageNet Dataset (for validating ideas quickly)

The average resolution of ImageNet images is 469×387, but they are usually cropped to 256×256 or 224×224 during preprocessing, so reading can be sped up by storing downscaled copies of the images. In particular, a downscaled dataset may even fit entirely into memory.

# N: the maximum size of the smaller edge
python resize_imagenet.py --src </path/to/imagenet> --dst </path/to/imagenet/resized> --max-size N
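The downscaling step itself is straightforward. Below is a minimal sketch of the idea (an assumption, not the repo's resize_imagenet.py): shrink each image so its smaller edge is at most max_size, preserving the aspect ratio and never upscaling.

import os
from PIL import Image

def resize_one(src_path, dst_path, max_size=256):
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    scale = max_size / min(w, h)
    if scale < 1.0:  # only downscale, never upscale
        img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    os.makedirs(os.path.dirname(dst_path), exist_ok=True)
    img.save(dst_path, quality=95)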

Training with a smaller image size

As reported in Fixing the train-test resolution discrepancy, you can train models at a smaller image size than the one used at test time.
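A small example of what this looks like with torchvision transforms; the particular sizes here are illustrative, not values from the paper or the repo.

import torchvision.transforms as T

# Train at a lower resolution than the one used for evaluation.
train_tf = T.Compose([T.RandomResizedCrop(160), T.RandomHorizontalFlip(), T.ToTensor()])
val_tf   = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])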
