Comments (24)
Is this with the VGG16 backbone? And did you change the batch size?
Yes, it throws a CUDA out of memory error, so I changed the batch size to 1.
The backbone is R-50-FPN-RETINANET.
Since the cross-image graph-based message propagation (within a batch) is necessary, the batch size should be set to at least 2. We tested batch sizes 2 and 4. Did you change the learning rate for bs=1?
I didn't change the learning rate, but it still throws a CUDA out of memory error with a batch size of 2.
We used a 2080 Ti (12GB) for bs=2 and a V100 (36GB) for bs=4, and never tried bs=1.
It is common practice to halve the learning rate when you halve the batch size, so for now you can try halving the learning rate. We will test bs=1 further if you still face this problem, but we still don't recommend training with only bs=1.
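The linear-scaling heuristic above can be sketched as follows (an illustrative sketch, not the project's code; the reference point of lr=0.0025 at bs=2 comes from later in this thread):

```python
def scale_lr(base_lr: float, base_bs: int, new_bs: int) -> float:
    """Linear scaling heuristic: scale the learning rate with the batch size."""
    return base_lr * new_bs / base_bs

# Halving the batch size (2 -> 1) halves the learning rate.
print(scale_lr(0.0025, 2, 1))  # 0.00125
```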
At the beginning it starts well, but at some point in the iterations it shows that error. I will try reducing the learning rate.
'CUDA out of memory' even with a learning rate of 0.0005.
It seems that your GPU memory is too small.
Try further reducing the number of sampled nodes by changing these values in the YAML config file (the number of sampled nodes can increase during training):
NUM_NODES_PER_LVL_SR: 50
NUM_NODES_PER_LVL_TG: 50
Reduce the node number as far as needed until the CUDA out of memory error no longer appears, although it may have some negative impact on performance.
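As a sketch, lowering both caps amounts to clamping the two YAML keys above (the dict and clamping logic here are illustrative, not SIGMA's config code):

```python
def reduce_node_caps(cfg: dict, cap: int) -> dict:
    """Clamp the per-level sampled-node caps to shrink the graphs (and memory use)."""
    out = dict(cfg)
    for key in ("NUM_NODES_PER_LVL_SR", "NUM_NODES_PER_LVL_TG"):
        out[key] = min(out[key], cap)
    return out

cfg = {"NUM_NODES_PER_LVL_SR": 100, "NUM_NODES_PER_LVL_TG": 100}
print(reduce_node_caps(cfg, 50))
# {'NUM_NODES_PER_LVL_SR': 50, 'NUM_NODES_PER_LVL_TG': 50}
```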
This is the GPU I am using. So can I reduce NUM_NODES_PER_LVL_SR and NUM_NODES_PER_LVL_TG to any number?
Actually, an 8GB GPU is a little small for detection tasks.
Sure, you can try any number of nodes, but you'd better not reduce it too much, as shown in Table 4.
Okay. And isn't there a checkpoint? It starts from scratch every time I restart it, even though it had completed many iterations before.
We automatically start saving checkpoints once the validation results exceed SOLVER.INITIAL_AP50, to save disk space. You can set SOLVER.INITIAL_AP50 to 0 to save more checkpoints.
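The gating described above amounts to the following (assumed logic, not the project's exact code):

```python
def should_save_checkpoint(val_ap50: float, initial_ap50: float) -> bool:
    """Save a checkpoint only when validation AP50 beats the configured threshold.

    With SOLVER.INITIAL_AP50 set to 0, every validated checkpoint is kept.
    """
    return val_ap50 > initial_ap50

print(should_save_checkpoint(42.0, 40.0))  # True: beats the threshold
print(should_save_checkpoint(38.0, 40.0))  # False: checkpoint is skipped
print(should_save_checkpoint(38.0, 0.0))   # True: threshold 0 keeps everything
```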
Let me try applying your suggestions. This issue will stay open until the process finishes.
Thank you. I will re-open if an issue is encountered.
Hi, I have reproduced your issue, and it should be addressed in the latest commit.
Since your batch size is so small (bs=1), there is an extreme case in which there are only two nodes in the source domain and none in the target domain. SIGMA then splits the source nodes into two parts to train the matching branch, producing target nodes of the wrong size: [256] instead of [num_nodes, 256].
We fixed this bug by adding lines that skip the middle head entirely when there are not enough source nodes.
After adding these lines, you can keep the original learning rate to train faster; otherwise it will take too long to train the model with only bs=1.
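The guard can be sketched like this (hypothetical names and threshold; the actual patch lives in graph_matching_head.py). Plain lists stand in for the node-feature tensors:

```python
def middle_head_forward(src_nodes, tgt_nodes, min_src_nodes=3):
    """Skip the graph-matching middle head when the source graph is too small.

    Without the guard, splitting only two source nodes to train the matching
    branch yields a degenerate [256] tensor instead of [num_nodes, 256].
    """
    if len(src_nodes) < min_src_nodes:
        return None  # jump out of the middle head for this iteration
    # ...cross-image graph-based message propagation would run here...
    return src_nodes, tgt_nodes

# Two source nodes and an empty target set: the head is skipped.
print(middle_head_forward([[0.0] * 256] * 2, []))  # None
# Enough nodes on both sides: the head runs normally.
print(middle_head_forward([[0.0] * 256] * 8, [[0.0] * 256] * 8) is None)  # False
```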
We have updated the README about small batch-size training for your convenience. The ResNet-50 backbone always gives better results than VGG16.
Oh, sorry for the late reply. I see; I'll check out the updates. But now, regarding the checkpoint: after more than 24 hours of training there was a power interruption, and when I restarted training it started from scratch and showed the same estimated remaining time as the original run, even though it had saved a checkpoint at each step. Here is a screenshot, and I showed the saved models in the GIF file. Please kindly check it out.
Hi, that's okay, since the framework will automatically load the latest checkpoint. You can ignore the INFO message, since it comes from EPM, which isn't used in our project. You can continue training the model and set the warm-up iterations to 0. It seems to work properly now; if you face the previous issue again, you only need to add the lines mentioned above.
I recommend changing the learning rate back to 0.0025 to train faster, as I find your model converges too slowly with only bs=1. As per the updated README, you need to train for twice as many iterations if you halve the batch size. Usually, for bs=2 with ResNet-50 (0.0025 lr), it can reach 40+ mAP in only 10000 iterations.
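The iteration rule above keeps the total number of images seen constant; a quick sketch:

```python
def scaled_iterations(base_iters: int, base_bs: int, new_bs: int) -> int:
    """Keep iters * batch_size (total images seen) constant across batch sizes."""
    return base_iters * base_bs // new_bs

# Halving bs=2 -> bs=1 doubles the 10000-iteration schedule.
print(scaled_iterations(10000, 2, 1))  # 20000
```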
Noted with thanks. So I don't have to re-download the repo? The update is in graph_matching_head.py, plus changing the learning rate to 0.0025 and the batch size to 2, so I can just replace graph_matching_head.py, right?
Yes, you only need to replace graph_matching_head.py and change BASE_LR in the YAML config file.
Dear sir, the 'CUDA out of memory' problem still persists after tens of thousands of iterations, even though I applied the recommendations, so I reverted to the original settings. Is there any other recommendation, please?
Hi, maybe you can disable one-to-one (o2o) matching by setting MODEL.MIDDLE_HEAD.GM.MATCHING_CFG to 'none', which will save a lot of CUDA memory. Please try this setting first, thanks!
Besides, we have added some solutions for limited GPU memory to the latest README. Kindly have a try.
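A rough way to see why disabling one-to-one matching saves memory: the matcher materialises a pairwise cost matrix over the sampled nodes, which 'none' skips entirely (a sketch of assumed behaviour, not SIGMA's code):

```python
def matching_cost_entries(num_src: int, num_tgt: int, matching_cfg: str) -> int:
    """Approximate pairwise-cost entries the o2o matcher allocates per level."""
    if matching_cfg == "none":
        return 0  # no cost matrix at all
    return num_src * num_tgt  # quadratic in the number of sampled nodes

print(matching_cost_entries(100, 100, "o2o"))   # 10000
print(matching_cost_entries(100, 100, "none"))  # 0
```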
Okay, thanks, I will try it.
Related Issues (20)
- Compare with model EPM?
- Unknown CUDA arch (8.6) or GPU not supported?
- Random seeds are used for training
- Some Random Thoughts
- Question about pseudo label and category mismatch.
- AttributeError
- Sim10k's ImageSets
- Visualization
- Source-only training, low mAP
- RuntimeError: Not compiled with GPU support
- How to tune hyperparameters for custom datasets?
- Missing annotation files for Pascal VOC based settings
- Question about VOC2Watercolor & Comic
- YAML file issue
- About iterative_test: I tested the training procedure with iteration%100==0 validation starting after 60000 iterations. The strange thing is that without iterative val the loss is different, whereas with iterative val the loss is the same.
- t-SNE visualization
- Source only
- Is there an interface for computing precision and recall?
- A problem when running 'python setup.py build develop'.
- VOC