
Comments (24)

wymanCV commented on June 23, 2024

Is this from the VGG-16 backbone? And did you change the batch size?

from sigma.

tilahun12 commented on June 23, 2024

Yes, it throws a CUDA OUT OF MEMORY error, so I changed the batch size to 1.


tilahun12 commented on June 23, 2024

The backbone is R-50-FPN-RETINANET.


wymanCV commented on June 23, 2024

> Yes, it throws a CUDA OUT OF MEMORY error, so I changed the batch size to 1.

Since the cross-image graph-based message propagation (within a batch) is necessary, the batch size should be set to at least 2. We tested batch sizes 2 and 4. Did you change the learning rate for bs=1?


tilahun12 commented on June 23, 2024

I didn't change the learning rate, but it still throws a CUDA out of memory error with a batch size of 2.


wymanCV commented on June 23, 2024

> I didn't change the learning rate, but it still throws a CUDA out of memory error with a batch size of 2.

We used a 2080 Ti (12GB) for bs=2 and a V100 (36GB) for bs=4, and never tried bs=1.

It is common practice to halve the learning rate when you halve the batch size. For now, you can try halving the learning rate. We will test bs=1 further if you still face this problem, but we still don't recommend training with only bs=1.
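The halving rule above is the linear-scaling heuristic; a minimal numeric sketch (the helper name and the base values are illustrative, not SIGMA's code):

```python
# Linear LR scaling heuristic: scale the learning rate in proportion
# to the batch size change. Hypothetical helper, for illustration only.
def scaled_lr(base_lr: float, base_bs: int, new_bs: int) -> float:
    """Return the learning rate scaled linearly with the batch size."""
    return base_lr * new_bs / base_bs

# Halving the batch size from 2 to 1 halves the learning rate.
print(scaled_lr(0.0025, 2, 1))  # 0.00125
```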


tilahun12 commented on June 23, 2024

At the beginning it starts well, but at some point in the iterations it shows that error. I will try reducing the learning rate.


tilahun12 commented on June 23, 2024

'CUDA out of memory' even for a learning rate of 0.0005.


wymanCV commented on June 23, 2024

> 'CUDA out of memory' even for a learning rate of 0.0005.

It seems that your GPU memory is too small.

Try further reducing the number of sampled nodes by changing this in the YAML config file. (The number of sampled nodes can increase during training.)

NUM_NODES_PER_LVL_SR: 50
NUM_NODES_PER_LVL_TG: 50

Reduce the node number as much as possible until the CUDA out of memory error no longer appears, although it may have some negative impact on performance.
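The effect of those two keys can be sketched as a simple cap on how many candidate nodes survive per feature level; the helper below is a hypothetical stand-in for the middle head's internal sampling, using only the key values from the snippet above:

```python
import random

# Hypothetical sketch of per-level node capping: keep at most `cap`
# candidates so the graph (and its memory footprint) stays bounded.
def sample_nodes(candidates, cap):
    if len(candidates) <= cap:
        return list(candidates)
    return random.sample(candidates, cap)

level_candidates = list(range(120))        # e.g. 120 candidates on one FPN level
kept = sample_nodes(level_candidates, 50)  # NUM_NODES_PER_LVL_SR: 50
print(len(kept))  # 50
```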


tilahun12 commented on June 23, 2024

This is the GPU I am using. So can I reduce NUM_NODES_PER_LVL_SR and NUM_NODES_PER_LVL_TG to any number?

[screenshot: GPU specifications]


wymanCV commented on June 23, 2024

Actually, an 8GB GPU is a little small for detection tasks.
Sure, you can try any number of nodes, but you'd better not reduce it too much, as shown in Table 4.


tilahun12 commented on June 23, 2024

Okay. And isn't there a checkpoint? It starts from scratch every time I restart it, even though it ran many iterations before.


wymanCV commented on June 23, 2024

> Okay. And isn't there a checkpoint? It starts from scratch every time I restart it, even though it ran many iterations before.

We only start saving checkpoints once the validation results exceed SOLVER.INITIAL_AP50, to save disk space. You can change SOLVER.INITIAL_AP50 to 0 to save more checkpoints.
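The gating behaviour described above amounts to a one-line predicate; only SOLVER.INITIAL_AP50 comes from this thread, the helper name is hypothetical:

```python
# Hypothetical sketch of AP50-gated checkpointing: a checkpoint is
# written only when validation AP50 exceeds the configured threshold.
def should_save_checkpoint(val_ap50: float, initial_ap50: float) -> bool:
    return val_ap50 > initial_ap50

print(should_save_checkpoint(35.0, 40.0))  # False: below threshold, skipped
print(should_save_checkpoint(35.0, 0.0))   # True: threshold 0 saves every eval
```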


tilahun12 commented on June 23, 2024

Let me try applying your suggestions. This issue will stay open until the process finishes.


tilahun12 commented on June 23, 2024

Thank you. I will re-open this if an issue is encountered.


wymanCV commented on June 23, 2024

Hi, I have reproduced your issue, and it should be addressed in the latest commit.

Since your bs is too small (bs=1), there is an extreme case in which there are only two nodes in the source domain and no nodes in the target domain. SIGMA then splits the source nodes into two parts to train the matching branch, leading to target nodes with the wrong size, [256] instead of [num_node, 256].

[screenshot: error traceback]

We fixed this bug by adding lines that skip the middle head directly when there are not enough source nodes.

[screenshot: code change]

Add these lines. Then you can try keeping the original learning rate to train faster; otherwise it will take too long to train the model with only bs=1.
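The fix described above amounts to a guard clause at the top of the middle head's forward pass; the sketch below is an illustration with hypothetical names and threshold, not the repository's actual diff:

```python
# Hypothetical guard: skip graph matching when there are too few source
# nodes (or no target nodes), instead of producing malformed tensors.
MIN_SRC_NODES = 3  # assumed threshold; two source nodes cannot be split safely

def run_middle_head(src_nodes, tg_nodes):
    """Return (src, tg) for matching, or None to skip the middle head."""
    if len(src_nodes) < MIN_SRC_NODES or len(tg_nodes) == 0:
        return None  # avoid the [256] vs [num_node, 256] size mismatch
    return src_nodes, tg_nodes

print(run_middle_head([[0.0] * 256] * 2, []))  # None: the guard fires
```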


wymanCV commented on June 23, 2024

> Thank you. I will re-open this if an issue is encountered.

We have updated the README regarding small batch-size training for your convenience. The ResNet-50 backbone always gives better results than VGG-16.


tilahun12 commented on June 23, 2024

Oh, sorry for the late reply. I'll check out the updates. But now, regarding the checkpoint: after more than 24 hours of training there was a power interruption, and when I restarted the training it started from scratch. It also shows the same estimated remaining time as the original run, even though it saved a checkpoint at each step. Here is a screenshot of it, and I showed the saved models in the GIF file. Please kindly check it out.

[screenshots: training log and saved checkpoints]


wymanCV commented on June 23, 2024

Hi, that's okay, since the framework automatically loads the latest checkpoint. You can ignore the INFO message, since it comes from EPM, which isn't used in our project. You can continue training directly and set the warm-up iterations to 0. It seems to work properly now; if you face the previous issue again, you only need to add the lines mentioned above.

I recommend changing the learning rate back to 0.0025 to train faster, as I find your model converges too slowly with only bs=1. As noted in the updated README, you need to train for double the iterations if you halve the batch size. Usually, with bs=2 and ResNet-50 (0.0025 lr), it can reach 40+ mAP in only 10000 iterations.
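The "double the iterations" rule keeps the total number of images seen constant; a quick arithmetic sketch (the helper is illustrative, not from the repo):

```python
# Keep total images seen (iterations * batch size) constant when the
# batch size changes. Illustrative helper only.
def equivalent_iters(base_iters: int, base_bs: int, new_bs: int) -> int:
    return base_iters * base_bs // new_bs

# The thread's bs=2 / 10000-iteration baseline needs 20000 iters at bs=1.
print(equivalent_iters(10000, 2, 1))  # 20000
```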


tilahun12 commented on June 23, 2024

Noted with thanks. So I don't need to re-download the repo; the update is in the file 'graph_matching_head.py'. I'll just change the learning rate to 0.0025 and the batch size to 2. So I can just replace graph_matching_head.py, right?


wymanCV commented on June 23, 2024

> Noted with thanks. So I don't need to re-download the repo; the update is in the file 'graph_matching_head.py'. I'll just change the learning rate to 0.0025 and the batch size to 2. So I can just replace graph_matching_head.py, right?

Yes, you only need to replace graph_matching_head.py and change BASE_LR in the YAML config file.


tilahun12 commented on June 23, 2024

Dear sir, the 'CUDA out of memory' problem still persists after tens of thousands of iterations, even though I applied the recommendations provided, so I reverted to the original settings. Is there any other recommendation, please?


wymanCV commented on June 23, 2024

> Dear sir, the 'CUDA out of memory' problem still persists after tens of thousands of iterations, even though I applied the recommendations provided, so I reverted to the original settings. Is there any other recommendation, please?

Hi, maybe you can disable the one-to-one (o2o) matching by setting MODEL.MIDDLE_HEAD.GM.MATCHING_CFG 'none', which will save a lot of CUDA memory. Please try this setting first, thanks!

Besides, we have updated the latest README with some solutions for limited GPU memory. Kindly have a try.
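Only the key MODEL.MIDDLE_HEAD.GM.MATCHING_CFG and the value 'none' come from this thread; the tiny dotted-key helper below is a hypothetical illustration of applying such an override to a nested config:

```python
# Hypothetical helper: set a nested config value from a dotted key,
# mimicking command-line overrides of the form "KEY value".
def apply_override(cfg: dict, dotted_key: str, value):
    *parents, leaf = dotted_key.split(".")
    node = cfg
    for p in parents:
        node = node.setdefault(p, {})
    node[leaf] = value
    return cfg

cfg = {"MODEL": {"MIDDLE_HEAD": {"GM": {"MATCHING_CFG": "o2o"}}}}
apply_override(cfg, "MODEL.MIDDLE_HEAD.GM.MATCHING_CFG", "none")
print(cfg["MODEL"]["MIDDLE_HEAD"]["GM"]["MATCHING_CFG"])  # none
```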


tilahun12 commented on June 23, 2024

Okay, thanks. I will try it.

