
Comments (3)

qibinc commented on July 27, 2024

Hi @jiangtanzju ,

TL;DR. It won't influence the performance.

We once ran a set of experiments generating embeddings with BatchNorm in train mode, which computes the batch mean and variance on the fly and thereby mitigates the discrepancy between the downstream graphs and the pretraining graphs. In that setting, if the instances in the test dataset are not shuffled, a small batch size causes data leakage between instances in the same batch, which is why we set the batch size to the full dataset size in the first place. Unless you want to try the same thing, you can ignore this.
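To make the leakage concrete, here is a minimal NumPy sketch (not the GCC code) of the two BatchNorm modes: in train mode each instance's output depends on its batch mates' statistics, so with an unshuffled test set and a small batch, information flows between neighboring test instances. The numbers below are made up for illustration.

```python
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    """Minimal 1-D BatchNorm: train mode uses batch statistics,
    eval mode uses the stored running statistics."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

# Running statistics accumulated during pretraining
running_mean, running_var = np.zeros(2), np.ones(2)

# A downstream batch whose distribution differs from pretraining
x = np.array([[10.0, 10.0], [12.0, 14.0]])

train_out = batchnorm(x, running_mean, running_var, training=True)
eval_out = batchnorm(x, running_mean, running_var, training=False)

# Train mode re-centers on the batch itself, coupling the two
# instances; eval mode keeps the (mismatched) pretraining stats.
print(train_out.mean(axis=0))  # ~0 per feature
print(eval_out.mean(axis=0))   # far from 0
```

With a batch size equal to the full dataset, the batch statistics are the dataset statistics, which is the behavior the experiment above relied on.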

As for the inference time, this line doesn't have much effect. I ran the `time xx` command twice, with and without this line; the CPU time was:

  1. 314.11s system
  2. 312.81s system

from gcc.

jiangtann commented on July 27, 2024

OK, I see.

The inference speed on my side differs by about ten times. Observing carefully via htop, I find that when bs is the length of the dataset, only one dataloader worker is active. When bs is the length of the dataset // 2, only 2 workers are active; when bs is the length of the dataset // 4, only 4 workers are active, even though my num_workers=12.

Only when bs < length of the dataset // num_workers do all of the workers run normally.
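This matches how a PyTorch DataLoader divides work: each worker prepares whole batches, so the number of busy workers can never exceed the number of batches, ceil(len(dataset) / bs). A small sketch of that bound (`active_workers` is a hypothetical helper for illustration, not a PyTorch API, and the dataset size is made up):

```python
import math

def active_workers(dataset_len, batch_size, num_workers):
    # Each DataLoader worker fetches whole batches, so at most one
    # worker per batch can be busy at any given time.
    num_batches = math.ceil(dataset_len / batch_size)
    return min(num_workers, num_batches)

n = 1200  # hypothetical dataset size
for divisor in (1, 2, 4, 8, 16):
    print(divisor, active_workers(n, n // divisor, num_workers=12))
# divisor 1 -> 1, 2 -> 2, 4 -> 4, 8 -> 8, 16 -> 12 (capped)
```

With bs equal to the dataset length there is exactly one batch, hence exactly one busy worker, which is the behavior observed in htop.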

You can see that when I set bs = length of the dataset // 8, only 8 workers are busy while the other workers sit idle:
(htop screenshot showing 8 active workers)

Anyway, thanks for your detailed reply 😋.


qibinc commented on July 27, 2024

Hi @jiangtanzju ,

The computation in each dataloader worker is dominated by scipy.sparse.linalg.eigsh here: https://github.com/THUDM/GCC/blob/master/gcc/datasets/data_util.py#L251.

It seems that in your setup this function is not parallel, so a single dataloader worker only utilizes 100% of one CPU. With MKL LAPACK installed, this function can run multi-threaded; in that case, even when the other dataloader workers are idle, the one active worker utilizes all of the CPUs, so in my setup the time doesn't change much. Anyway, this shouldn't matter, since you can simply increase the number of active loaders by decreasing batch_size.
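For reference, a small self-contained sketch of the kind of call involved (a random symmetric matrix stands in for the graph matrix GCC actually decomposes); whether it runs multi-threaded depends on the BLAS/LAPACK backend (MKL, OpenBLAS) that scipy is linked against:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
sym = (a + a.T) / 2  # eigsh requires a symmetric matrix

# Largest-algebraic 8 eigenpairs via Lanczos iteration; the heavy
# matrix-vector products are what a threaded BLAS can parallelize.
vals, vecs = eigsh(sym, k=8, which="LA")
print(vals.shape, vecs.shape)  # (8,) (200, 8)
```

Thread count for the backend can typically be controlled with environment variables such as OMP_NUM_THREADS or MKL_NUM_THREADS, which is one way to check whether the call parallelizes on a given machine.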

