Comments (7)
Traceback (most recent call last):
  File "demo_maxvqa.py", line 105, in <module>
    maxvqa = MaxVQA(text_tokens, embedding, text_encoder, share_ctx=True).cuda()
  File "/ExplainableVQA-master/model/maxvqa.py", line 71, in __init__
    self.text_feats = text_encoder(n_prompts.cuda(), self.tokenized_prompts)
  File "/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/ExplainableVQA-master/model/maxvqa.py", line 28, in forward
    x = self.transformer(x, attn_mask=self.attn_mask)
  File "miniconda3/envs/tf_torch_btx/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "ExplainableVQA/open_clip/src/open_clip/transformer.py", line 363, in forward
    x = r(x, attn_mask=attn_mask)
  File "lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "ExplainableVQA/open_clip/src/open_clip/transformer.py", line 263, in forward
    x = q_x + self.ls_1(self.attention(q_x=self.ln_1(q_x), k_x=k_x, v_x=v_x, attn_mask=attn_mask))
  File "ExplainableVQA/open_clip/src/open_clip/transformer.py", line 250, in attention
    return self.attn(
  File "lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "lib/python3.8/site-packages/torch/nn/modules/activation.py", line 1205, in forward
    attn_output, attn_output_weights = F.multi_head_attention_forward(
  File "lib/python3.8/site-packages/torch/nn/functional.py", line 5251, in multi_head_attention_forward
    raise RuntimeError(f"The shape of the 2D attn_mask is {attn_mask.shape}, but should be {correct_2d_size}.")
RuntimeError: The shape of the 2D attn_mask is torch.Size([77, 77]), but should be (32, 32).
I got this error, do you know why?
I reckon it could be related to the "batch_first" argument in a relatively newer version of torch. You can try removing the two "permute" operations in TextEncoder's forward function (model/maxvqa.py, lines 27 and 29).
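A minimal sketch of why the mismatch looks like this (assuming CLIP's 77-token context and a hypothetical batch of 32 prompts; the dimensions are illustrative, not taken from the repo): with batch_first=False (the default), nn.MultiheadAttention reads its input as [seq, batch, dim], so if the permute is missing or doubled, the batch size (32) is read as the sequence length and the 77x77 causal mask is rejected.

```python
import torch
import torch.nn as nn

# batch_first defaults to False, so input is interpreted as [seq, batch, dim].
attn = nn.MultiheadAttention(embed_dim=512, num_heads=8)

x = torch.randn(32, 77, 512)  # [batch, seq, dim] -- NOT permuted to [seq, batch, dim]
# CLIP-style 77x77 causal mask: -inf above the diagonal, 0 elsewhere.
mask = torch.full((77, 77), float("-inf")).triu(1)

try:
    attn(x, x, x, attn_mask=mask)  # seq dim is read as 32, so a (32, 32) mask is expected
except RuntimeError as e:
    print(e)  # the same shape complaint as the traceback above
```

Permuting x to [77, 32, 512] first (or passing batch_first=True at construction) makes the mask shape line up again.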
+1. Did you end up finding a way to resolve this issue and run the demo?
Follow the instructions the repo gives for installing open_clip and you should not get this error. If you install it with pip install open-clip-torch instead, you will get this error: the authors modified open_clip, so the stock package does not match.
For reference, yes, the installation of open_clip was the issue, but it was caused by the sed command used in the installation not working exactly as described by the documentation on macOS. Changing the command to

sed -i "" "92s/return x\[0\]/return x/" src/open_clip/modified_resnet.py

made this work for me (or you can just manually edit line 92 of modified_resnet.py to remove the square brackets).
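The underlying difference is that GNU sed glues any backup suffix onto the -i flag itself, while BSD/macOS sed requires a separate (possibly empty) suffix argument, which is why the empty "" is needed on macOS. A quick sketch on a stand-in file (demo_resnet.py is a made-up name, and this assumes GNU sed as found on most Linux systems):

```shell
# Create a stand-in file whose line 2 mimics line 92 of modified_resnet.py.
printf 'def forward(self, x):\n    return x[0]\n' > demo_resnet.py

# GNU sed (Linux): in-place edit, no separate suffix argument.
sed -i "2s/return x\[0\]/return x/" demo_resnet.py
# BSD sed (macOS) would instead need: sed -i "" "2s/return x\[0\]/return x/" demo_resnet.py

cat demo_resnet.py
```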
Hello, why do I still get the shape error after removing the square brackets on line 92? These are the steps I ran:
git clone https://github.com/mlfoundations/open_clip.git
cd open_clip
sed - i'92s/return x \ [0 ]/return x/'src/open_clip/modified_resnet.py
pip install - e.
Did you follow the steps in the README to install open_clip? Is there any other solution to this shape error?
Not sure if this is just a pasting error in your question, but if you're using the above command exactly as you've written it, the sed command is incorrect. You want either:

sed -i "92s/return x\[0\]/return x/" src/open_clip/modified_resnet.py

(original)

or

sed -i "" "92s/return x\[0\]/return x/" src/open_clip/modified_resnet.py

(what I laid out above for macOS)
then
pip install -e .
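One extra sanity check that may help (a sketch, not part of the repo; installed_location is a hypothetical helper): confirm that Python is actually importing the patched, editable-installed open_clip rather than a leftover copy from PyPI.

```python
import importlib.util
from typing import Optional

def installed_location(modname: str) -> Optional[str]:
    """Return the file a module would be imported from, or None if it is not installed."""
    spec = importlib.util.find_spec(modname)
    return spec.origin if spec else None

# After `pip install -e .` inside the cloned repo, this should point into your
# checkout (.../open_clip/src/open_clip/__init__.py), not into site-packages.
print(installed_location("open_clip"))
```

If the path points at site-packages/open_clip from open-clip-torch, uninstall that package first and re-run the editable install.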