Comments (14)
Holy cow. How did this pass muster? I don't have access to my cluster right now, but 13.6 GB for a ResNet-26 is absolutely ridiculous. What's the use of such costly models?
from stand-alone-self-attention.
I have the same problem with ResNet-26 on CIFAR-10. Running on an RTX 2070 with 8 GB of GPU RAM, I get the message:
Tried to allocate 308.00 MiB (GPU 0; 8.00 GiB total capacity; 5.08 GiB already allocated; 266.22 MiB free; 24.83 MiB cached)
This seems really odd. I would also like to know what hardware this was created on.
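For anyone trying to pin down where the memory goes, here is a minimal sketch of inspecting GPU allocation with PyTorch's built-in counters. The model and input below are placeholders, not the repo's actual ResNet-26 or training code:

```python
import torch

# Hypothetical stand-in model/input; substitute the repo's network and a real batch.
model = torch.nn.Conv2d(3, 64, 3).cuda()
x = torch.randn(8, 3, 32, 32, device="cuda")

out = model(x)
out.sum().backward()

# torch.cuda tracks allocations per device.
print(f"allocated:     {torch.cuda.memory_allocated() / 2**20:.1f} MiB")
print(f"max allocated: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
print(torch.cuda.memory_summary(abbreviated=True))
```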
from stand-alone-self-attention.
I have run into the same issue; the memory never seems to be enough.
from stand-alone-self-attention.
It uses about 13.00 GiB, so if your GPU has less memory than that, you will hit this out-of-memory problem. You could reduce the batch size to work around it.
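For example, a minimal sketch of shrinking the batch size with a standard PyTorch DataLoader; the dataset path, transform, and loop here are only illustrative, not the repo's exact training code:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A smaller batch size directly shrinks the activation memory that the
# self-attention layers need per forward/backward pass.
train_set = datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)
train_loader = DataLoader(train_set, batch_size=8, shuffle=True, num_workers=2)

for images, labels in train_loader:
    images = images.cuda(non_blocking=True)
    # ... forward, loss, backward as usual ...
    break
```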
from stand-alone-self-attention.
13 GB for a ResNet-26?
from stand-alone-self-attention.
I don't know why, but this is how it looks at the moment. Check GPU Process 1.
from stand-alone-self-attention.
I agree with you. This is very costly.
from stand-alone-self-attention.
Yes, this is too costly and slow to replace convolutions.
from stand-alone-self-attention.
This is ridiculous... I am using a p2.16xlarge with 16 GPUs and the CUDA memory is still not enough?
from stand-alone-self-attention.
Have you all tried running the experiments with different batch sizes? I was able to run the default CIFAR10 experiments with a batch size of 8 using an RTX 2060. Total memory consumption was under 5 GB. On a more powerful machine with 32 GB V100 cards, I was able to run the default experiment configuration for CIFAR10 while using ~15 GB.
from stand-alone-self-attention.
@iyaja
Hi Ajay, thank you for your reply!
I am running it on the ChestXray dataset, and it doesn't work even with a batch size of 1.
Could I ask how you changed the batch size? I do it via --batch-size=1.
What is weird is that the RuntimeError stays exactly the same no matter how small a batch size I set:
With a batch size of 32, it says "CUDA out of memory. Tried to allocate 25 MB".
With a batch size of 1, it still says "CUDA out of memory. Tried to allocate 25 MB".
I also have no idea why it shows gpu_devices: None and gpu: None. As shown in my last image, all 16 GPUs appear to be occupied.
Another weird thing: when I was using 8 GPUs, it said CUDA tried to allocate 47 MB. After switching to 16 GPUs, it says it tried to allocate 25 MB. Why does adding GPUs help so little?
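For context, here is a rough sketch of why adding GPUs barely changes the failing allocation, assuming the script relies on something like torch.nn.DataParallel (an assumption on my part, not verified against this repo):

```python
import torch
import torch.nn as nn

# Hypothetical: wrap any module in DataParallel to scatter the batch across GPUs.
model = nn.DataParallel(nn.Conv2d(3, 64, 3).cuda())

# With 16 GPUs, a batch of 32 becomes 2 samples per GPU, so each GPU's activation
# footprint shrinks roughly in proportion to the GPU count, while parameters are
# still replicated on every device.
x = torch.randn(32, 3, 224, 224, device="cuda")
out = model(x)  # forward runs on all visible GPUs

# Going from 8 to 16 GPUs only halves the per-GPU batch, which may be why the
# failing allocation dropped from ~47 MB to ~25 MB instead of disappearing.
```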
from stand-alone-self-attention.
I am trying to use AttentionConv in another model. The reported params size is small, but the forward/backward pass size is huge at the same time.
Why is the forward/backward pass size so large?
from stand-alone-self-attention.
It is because AttentionConv uses unfold, which incurs a huge GPU memory cost. The parameter count is small, but the intermediate feature maps are very large.
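To make that concrete, a small sketch of how much the unfold-based neighborhood extraction expands the activations that must be kept for the backward pass; the shapes and the 7x7 window below are just example values:

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 64, 32, 32)             # (batch, channels, H, W) example input

k = 7                                       # example 7x7 attention window
patches = F.unfold(x, kernel_size=k, padding=k // 2)

# unfold output shape: (batch, channels * k * k, H * W)
print(patches.shape)                        # torch.Size([8, 3136, 1024])

# The unfolded tensor is k*k times larger than the input feature map, and it
# (plus the similar tensors for queries/keys/values) is stored for backward,
# which is where the "forward/backward pass size" blows up.
print(x.numel(), patches.numel(), patches.numel() / x.numel())  # ratio = k*k = 49
```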
from stand-alone-self-attention.
Related Issues (20)
- The wrong imp of the inner-product operation HOT 3
- problem with unfold HOT 1
- Does not work when out_channels is not even
- Question about einsum.
- how about replacing einsum with normal multiplication
- Train with IMAGENET HOT 3
- matrix multiplication instead of scalar dot product HOT 4
- Problems about groups HOT 4
- Error loading pretrained model HOT 1
- Can anyone train resnet50 successfully without NaN HOT 3
- Has anyone tried changing the batch size HOT 1
- Add `sum(dim=2)` for dot-product HOT 1
- A question about relative position embeddings HOT 1
- Stand alone self attention combine with CycleGAN
- Loss is NaN HOT 2
- Large memory consumption HOT 2
- v_out = torch.cat((v_out_h + self.rel_h, v_out_w + self.rel_w), dim=1) HOT 2
- How to calculate the relative positional embeddings from a row offset and column offset? HOT 3
- Is the 89% acc reported in the readme consistent with the paper? HOT 1