yuweihao / KERN
Code for Knowledge-Embedded Routing Network for Scene Graph Generation (CVPR 2019)
License: MIT License
Hello, I encountered the following problem while reproducing the code. How can I solve it? Thank you.
Traceback (most recent call last):
File "models/train_rels.py", line 17, in
from lib.evaluation.sg_eval import BasicSceneGraphEvaluator, calculate_mR_from_evaluator_list, eval_entry
ImportError: cannot import name 'calculate_mR_from_evaluator_list'
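Judging by its name and how it is called from eval_rels.py, `calculate_mR_from_evaluator_list` most likely averages recall over per-predicate evaluators to produce the mean recall (mR@K) metric. The sketch below is a guess at that behavior, not the repo's actual implementation; the `evaluator_list` structure and `result_dict` layout are assumptions modeled on neural-motifs-style evaluators.

```python
# Hypothetical sketch of calculate_mR_from_evaluator_list: average the
# per-predicate recalls to get mean recall (mR@K). All names/structures here
# are assumptions, not the repo's actual code.

def calculate_mR_from_evaluator_list(evaluator_list, mode='predcls'):
    """evaluator_list: list of (predicate_name, evaluator) pairs, where
    evaluator.result_dict[mode + '_recall'] maps K -> list of per-image
    recalls for that predicate."""
    mean_recall = {}
    for k in (20, 50, 100):
        per_predicate = []
        for _, evaluator in evaluator_list:
            recalls = evaluator.result_dict[mode + '_recall'][k]
            if recalls:  # skip predicates with no ground-truth instances
                per_predicate.append(sum(recalls) / len(recalls))
        mean_recall[k] = (sum(per_predicate) / len(per_predicate)
                          if per_predicate else 0.0)
    return mean_recall
```

Unlike plain R@K, this weights every predicate class equally, which is why it is reported separately in the paper.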
@yuweihao I was reading your paper (KERN) and wanted to make sure there is no mistake in Equation 6. You explain it as: all correlated output feature vectors are aggregated to predict the class label, but you have also used the hidden state of the last class, i.e. h_iC, so I am confused. Could you please clarify?
Thanks.
Hello, thanks for this awesome repo. While running ./scripts/eval_kern_sgdet.sh, I am getting the following error:
with torch.no_grad():
instead.
Any help on how to solve this issue?
Thank you!
@yuweihao I want to know whether the final result of this code is the scene graph of the picture. Can this scene graph be visualized? That is, do you input a picture and output the corresponding scene graph? Thank you.
Hello, when I run "bash ./scripts/train_kern_predcls.sh" I get the following error:
AttributeError: module 'lib.fpn.roi_align._ext.roi_align' has no attribute 'roi_align_forward_cuda'
Could you explain this? I indeed cannot find where roi_align_forward_cuda is defined. Thanks!
Hello,
Thank you for your very useful code! If I want a scene graph generated for an image that is not in the Visual Genome dataset, I believe I have to wrap it in a "Blob" object (dataloaders.blob.Blob) and use it in sgdet mode. Is that correct? Also, if I want to use visualize_sgcls, it looks like I have to add ground-truth information to the class info; I am assuming I can set the ground truth to null (is this assumption correct?). Do you have any pointers for getting a novel image into a Blob object (if that is the best way of doing it)?
Thank you very much for any help!
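For anyone attempting this, the sketch below shows the kind of preprocessing a VG-style dataloader typically applies before an image is wrapped in a Blob: scaling so the longer side hits a fixed size and ImageNet channel normalization. The constant `IM_SCALE`, the function name, and the exact Blob fields are assumptions; check dataloaders/blob.py and the dataset transforms for the real values (the actual pixel resize would use PIL or cv2, which is omitted here).

```python
import numpy as np

# Hypothetical preprocessing sketch for a custom image, mirroring the kind of
# transform a VG dataloader applies (resize + ImageNet normalization).
# IM_SCALE and the field layout are assumptions; verify against the repo.
IM_SCALE = 592
PIXEL_MEAN = np.array([0.485, 0.456, 0.406])
PIXEL_STD = np.array([0.229, 0.224, 0.225])

def prepare_image(img):
    """img: HxWx3 uint8 array -> (normalized float array, scale factor).

    The scale factor is what you would use to resize the image (with
    PIL/cv2) and to rescale predicted boxes back to original coordinates.
    """
    h, w = img.shape[:2]
    scale = IM_SCALE / max(h, w)          # longer side becomes IM_SCALE
    x = img.astype(np.float32) / 255.0
    x = (x - PIXEL_MEAN) / PIXEL_STD      # channel-wise ImageNet normalization
    return x, scale
```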
Hi,
Is there any way to use more than one GPU for training relation classification/detection?
Thanks!
Thank you so much for releasing this repository, it looks awesome!
Quick question: how much time does it take you to train the graph classification/detection, say per epoch?
Thanks!
I am getting the following error trace when I execute CUDA_VISIBLE_DEVICES=0 ./scripts/eval_kern_predcls.sh
save_rel_recall : results/kern_rel_recall_predcls.pkl
Unexpected key ggnn_obj_reason.obj_proj.weight in state_dict with size torch.Size([512, 4096])
Unexpected key ggnn_obj_reason.obj_proj.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq3_w.weight in state_dict with size torch.Size([512, 1024])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq3_w.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq3_u.weight in state_dict with size torch.Size([512, 512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq3_u.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq4_w.weight in state_dict with size torch.Size([512, 1024])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq4_w.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq4_u.weight in state_dict with size torch.Size([512, 512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq4_u.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq5_w.weight in state_dict with size torch.Size([512, 1024])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq5_w.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq5_u.weight in state_dict with size torch.Size([512, 512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_eq5_u.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_output.weight in state_dict with size torch.Size([512, 1024])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_output.bias in state_dict with size torch.Size([512])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_obj_cls.weight in state_dict with size torch.Size([151, 77312])
Unexpected key ggnn_obj_reason.ggnn_obj.fc_obj_cls.bias in state_dict with size torch.Size([151])
0%| | 0/26446 [00:00<?, ?it/s]
cudaCheckError() failed : no kernel image is available for execution on the device
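Two separate things appear to be going on in that trace. The "Unexpected key ggnn_obj_reason.*" messages suggest the checkpoint contains the GGNN module while the model being constructed does not (a model-config mismatch), and the final cudaCheckError generally means the compiled CUDA extensions were not built for your GPU's compute capability. The first part is often handled by loading only the checkpoint entries that match the current model. The sketch below illustrates that filtering logic over plain dicts standing in for PyTorch state_dicts; it is an illustration of the idea, not the repo's restore code.

```python
# Illustrative sketch (plain dicts standing in for PyTorch state_dicts):
# keep only checkpoint entries whose key exists in the current model with a
# matching shape, which is roughly what "optimistic" checkpoint loading does.

def filter_state_dict(model_state, ckpt_state):
    loaded, skipped = {}, []
    for key, value in ckpt_state.items():
        if key in model_state and getattr(value, 'shape', None) == getattr(model_state[key], 'shape', None):
            loaded[key] = value
        else:
            skipped.append(key)  # e.g. ggnn_obj_reason.* when that module is absent
    return loaded, skipped
```

With real PyTorch modules, the equivalent effect comes from pruning the checkpoint dict before calling load_state_dict on it.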
Hi @yuweihao, sorry that I haven't read the code, but when I read the paper I had some questions about the input of the graph nodes.
I am confused about it and not sure how you construct the input. Could you give me some advice? Thanks very much!
These two functions are missing: calculate_mR_from_evaluator_list and eval_entry. When I executed eval_rels.py I faced an ImportError on this line. Could you please check it for me?
Btw, for step 5 in the SETUP section of README.MD, ./scripts/refine_for_detection.sh is not in your directory and seems to exist only in the original neural-motifs repo. I guess you have changed it to train_kern_sgdet.py?
Hello,
I am trying to print out the predicted relations on evaluation images here:
https://github.com/yuweihao/KERN/blob/master/models/eval_rels.py#L70
To my surprise, the relations (rels_i) are tuples of size 2 (only including the object indices, not the relation), while gt_relations have size 3 (as expected). Can you help me with that?
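In neural-motifs-style evaluation code, which KERN builds on, the prediction entry usually stores the (subject, object) index pairs separately from a per-pair predicate score matrix, and the full (subj, pred, obj) triples are formed by taking an argmax over the predicate scores. The sketch below shows that convention; the assumption that column 0 of the score matrix is the background/no-relation class comes from that codebase family and should be verified against this repo.

```python
import numpy as np

# Hedged sketch: combining (subject, object) index pairs with per-pair
# predicate scores into (subj, pred, obj) triples, as neural-motifs-style
# evaluators typically do. Column 0 of rel_scores is assumed to be the
# background/no-relation class.

def rels_to_triples(pred_rel_inds, rel_scores):
    """pred_rel_inds: (N, 2) object indices; rel_scores: (N, num_predicates)."""
    pred_classes = rel_scores[:, 1:].argmax(axis=1) + 1  # skip background col 0
    return np.column_stack([pred_rel_inds[:, 0], pred_classes, pred_rel_inds[:, 1]])
```

So the size-2 tuples are not missing information: the predicate lives in a separate score array.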
In Step 5:
Train scene graph classification: run CUDA_VISIBLE_DEVICES=YOUR_GPU_NUM ./scripts/train_kern_predcls.sh.
Is this a mistake? Should it run ./scripts/train_kern_sgcls.sh instead?
ImportError: /home/KERN/lib/fpn/nms/_ext/nms/_nms.so: undefined symbol: __cudaPopCallConfiguration
Please give a suggestion on how to resolve this issue.
Hello, when pretraining the detector on VG I ran into this error:
from dataloaders.mscoco import CocoDetection, CocoDataLoader
ModuleNotFoundError: No module named 'dataloaders'
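This error usually means the script was launched from somewhere other than the repository root, so Python cannot see the `dataloaders/` package. The common fixes are to run from the repo root (optionally with `PYTHONPATH=.`), or to put the repo root on `sys.path` before the imports, as sketched below. The `'KERN'` path is an assumption about where the repo was cloned; adjust it to your checkout.

```python
import os
import sys

# Common fix for "No module named 'dataloaders'": put the repo root (the
# directory containing dataloaders/) on sys.path before importing anything
# from it. The relative path 'KERN' is an assumption about your clone location.
REPO_ROOT = os.path.abspath('KERN')
sys.path.insert(0, REPO_ROOT)
```

Equivalently, from a shell: `cd KERN && PYTHONPATH=. python models/train_rels.py ...`.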
Hi,
Line 32 of generate_knowledge.py reads mat[gt_classes[i], gt_classes[j]] += 1.
Should it be mat[gt_classes_list[i], gt_classes_list[j]] += 1, since there are repeated labels in gt_classes?
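A toy example makes the difference concrete: with repeated labels, indexing the co-occurrence matrix via the raw per-box labels counts every box pair (including same-label pairs), whereas a deduplicated label list counts each label pair once. This is an illustration of the question, not a claim about which behavior generate_knowledge.py intends; `NUM_CLASSES` and the labels are toy values.

```python
import numpy as np

# Toy illustration: label co-occurrence counted from raw per-box labels
# vs. from a deduplicated label list. Values are toy-sized assumptions.
NUM_CLASSES = 5
gt_classes = [1, 1, 3]  # two boxes share label 1

mat_raw = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=int)
for i in range(len(gt_classes)):
    for j in range(len(gt_classes)):
        if i != j:
            mat_raw[gt_classes[i], gt_classes[j]] += 1  # counts (1,1) pairs too

gt_classes_list = sorted(set(gt_classes))  # deduplicated: [1, 3]
mat_dedup = np.zeros_like(mat_raw)
for i in range(len(gt_classes_list)):
    for j in range(len(gt_classes_list)):
        if i != j:
            mat_dedup[gt_classes_list[i], gt_classes_list[j]] += 1
```

Here mat_raw records a (1, 1) self-co-occurrence and double-counts (1, 3), while mat_dedup does neither, which is presumably the distinction the question is about.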
I can't open Google Drive. Is there any other way to download the checkpoint, such as Baidu Netdisk? I am eager for your reply. Thank you a lot.
I was wondering if you support inference on a custom image. For example, if I wanted to generate a scene graph using your pre-trained models from a custom image not in the Visual Genome dataset, would I be able to do that?
While pretraining the VG detector with the ./scripts/pretrain-detector.sh command, there was an error: No module named 'dataloaders.mscoco'. What should I do?
@yuweihao Do you have any idea why I am getting this error? I tried to run the code on custom data and made some changes to run on CPU only, and I am getting this error in nms.py, in the nms_apply() function.
Also, please let me know the changes needed to run your code on CPU only.
Thank you.
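For the CPU-only question above: the repo's NMS is a compiled CUDA extension, so it cannot run without a GPU. One workaround is to substitute a pure-NumPy NMS like the standard implementation sketched below; it is much slower but dependency-free. Wiring it into nms_apply() is left as an exercise since the extension's exact call signature varies.

```python
import numpy as np

# Pure-NumPy non-maximum suppression that can stand in for the compiled
# CUDA extension when running on CPU only. Boxes are (x1, y1, x2, y2);
# returns indices of kept boxes, highest-scoring first.

def nms_cpu(boxes, scores, iou_thresh=0.3):
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]       # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the current top box against the remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep
```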
Hi there,
Is there any code to visualize detection and your model result?
Thanks!
I am getting many errors at different points for different torch versions (0.3.0, 0.4.0, or >1.0). Just wanted to share:
pip install torch==0.4.1 torchfile==0.1.0 torchvision==0.2.0
This solved the PyTorch version issues for me.
In object_detector.py:
if len(dets) == 0:
print("nothing was detected", flush=True)
return None
and in kern_model.py:
ValueError("heck")
Has anyone else met these problems? Any suggestions?
Thank you for your excellent code.
I want to know the exact meaning of the rm and od prefixes, as in:
return Result(
od_obj_dists=od_obj_dists,
rm_obj_dists=obj_dists,
obj_scores=nms_scores,
obj_preds=nms_preds,
obj_fmap=obj_fmap,
od_box_deltas=od_box_deltas,
rm_box_deltas=box_deltas,
od_box_targets=bbox_targets,
rm_box_targets=bbox_targets,
od_box_priors=od_box_priors,
rm_box_priors=box_priors,
boxes_assigned=nms_boxes_assign,
boxes_all=nms_boxes,
od_obj_labels=obj_labels,
rm_obj_labels=rm_obj_labels,
rpn_scores=rpn_scores,
rpn_box_deltas=rpn_box_deltas,
rel_labels=rel_labels,
im_inds=im_inds,
fmap=fmap if return_fmap else None,
)
in lib/object_detector.py.
What's more, why do I always get "RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58", even though I changed batch_size and num_workers to 1 and my GPU has 16 GB of memory?
Waiting for your reply, thank you once again.
I installed the exact versions of PyTorch as required and ran into the same problem as in rowanz/neural-motifs#2. However, after changing the make options in the CUDA files (roi_align and nms) to /usr/local/cuda/bin/nvcc -c -o file.cu.o file.cu --compiler-options -fPIC -gencode arch=compute_35,code=sm_35 (my GPU is a Tesla K40c; I think its compute capability is 3.5), I still got the same error. Do you have any idea how I can fix it?