
Comments (4)

wymanCV commented on September 4, 2024

Thanks for your interest! Please refer to the following onedrive link.

https://portland-my.sharepoint.com/:u:/g/personal/wuyangli2-c_my_cityu_edu_hk/ESOgJbvystdDiGbMLiGnL50BvxxwSJ3LjR22yxo9-OdTOA?e=5cA2xY


Lybnn commented on September 4, 2024

Thank you very much! That link seems to be accessible only with a school email account, so I downloaded the VGG-16 pretrained weights from another URL instead: https://download.openmmlab.com/pretrain/third_party/vgg16_caffe-292e1171.pth
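As a quick sanity check that the file downloaded correctly, the weights can be inspected before training. This is just a minimal sketch, assuming vgg16_caffe-292e1171.pth stores a plain PyTorch state dict (adjust the path to wherever the file was saved):

import torch

# Load the downloaded backbone weights on CPU and list a few parameter tensors.
ckpt = torch.load("vgg16_caffe-292e1171.pth", map_location="cpu")
state_dict = ckpt["state_dict"] if "state_dict" in ckpt else ckpt  # handle either layout
print(len(state_dict), "tensors")
for name, tensor in list(state_dict.items())[:3]:
    print(name, tuple(tensor.shape))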
I built the environment, but there is a problem at fcos_core/modeling/rpn/fcos/inference.py, line 77:
Traceback (most recent call last):
  File "tools/train_net_da.py", line 20, in <module>
    from fcos_core.modeling.detector import build_detection_model
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/detector/__init__.py", line 2, in <module>
    from .detectors import build_detection_model
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/detector/detectors.py", line 2, in <module>
    from .generalized_rcnn import GeneralizedRCNN
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/detector/generalized_rcnn.py", line 11, in <module>
    from ..backbone import build_backbone
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/backbone/__init__.py", line 3, in <module>
    from . import fbnet
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/backbone/fbnet.py", line 14, in <module>
    from fcos_core.modeling.rpn import rpn
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/rpn/rpn.py", line 9, in <module>
    from fcos_core.modeling.rpn.fcos.fcos import build_fcos
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/rpn/fcos/fcos.py", line 5, in <module>
    from .inference import make_fcos_postprocessor
  File "/media/ubuntu/9a32be42-b1fc-4942-91da-4a28a2388ede/zyb/SCAN/fcos_core/modeling/rpn/fcos/inference.py", line 77
    pre_nms_top_n = pre_nms_top_n.clamp(max=self.pre_nms_top_n)runtimeerror: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead
    ^
SyntaxError: invalid syntax
Have you encountered this problem, and what should I do to fix it? Thanks!


wymanCV commented on September 4, 2024


Hi, sorry for the wrong link. I have corrected it.

It seems that the project was built successfully. Did you use the DAOD benchmark dataset or your custom one? Are you running SCAN or SIGMA? SIGMA is a much more robust version.

Finally, I haven't encountered this issue before, but I can try to help if you provide more details about your experiments.


Lybnn commented on September 4, 2024

Thanks for the reply! This is SCAN running on the Cityscapes dataset. I will reproduce SIGMA next, and if I have any questions I will ask you again. Thanks again for your excellent contribution.
I tried changing pre_nms_top_n = candidate_inds.view(N, -1).sum(1) to pre_nms_top_n = candidate_inds.reshape(N, -1).sum(1), and now it seems to work! Here is a preliminary run from cityscapes to cityscapes_foggy (the log follows the sketch below); is this correct?
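For context, the change matches the hint in the error message: .view() requires the tensor's memory layout to be compatible with the requested shape, while .reshape() falls back to copying when it is not. Below is a minimal standalone sketch (not the SCAN code itself; the shapes are made up) that reproduces the failure and the fix:

import torch

# A non-contiguous boolean mask, e.g. the result of a permute() somewhere upstream.
candidate_inds = (torch.rand(2, 5, 4) > 0.5).permute(0, 2, 1)
N = candidate_inds.size(0)

# candidate_inds.view(N, -1)  # raises: "view size is not compatible with input
#                             #  tensor's size and stride ... Use .reshape(...) instead"
pre_nms_top_n = candidate_inds.reshape(N, -1).sum(1)  # .reshape() copies if needed
print(pre_nms_top_n)  # number of candidate locations per image, e.g. tensor([11, 9])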
2024-06-30 15:05:21,652 fcos_core.trainer INFO: Start training
/home/ubuntu/anaconda3/envs/SCAN/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
2024-06-30 15:06:22,992 fcos_core.trainer INFO: eta: 2 days, 20:08:05 iter: 20 loss_gs: 5.0374 (5.9095) node_loss_gs: 2.1283 (2.1133) act_loss_gs: 0.1318 (0.3350) loss_cls_gs: 0.6337 (0.6870) loss_reg_gs: 1.4703 (2.1026) loss_centerness_gs: 0.6661 (0.6716) loss_ds: 0.3457 (0.3460) zeros: 0.0000 (0.0000) loss_adv_P7_CON_ds: 0.0702 (0.0703) loss_adv_P6_CON_ds: 0.0696 (0.0696) loss_adv_P5_CON_ds: 0.0685 (0.0685) loss_adv_P4_CON_ds: 0.0692 (0.0692) loss_adv_P3_CON_ds: 0.0682 (0.0684) loss_dt: 0.3476 (0.3476) zero_gt: 0.0000 (0.0000) loss_adv_P7_CON_dt: 0.0685 (0.0684) loss_adv_P6_CON_dt: 0.0691 (0.0692) loss_adv_P5_CON_dt: 0.0702 (0.0702) loss_adv_P4_CON_dt: 0.0695 (0.0695) loss_adv_P3_CON_dt: 0.0704 (0.0703) time: 2.9972 (3.0668) data: 0.0077 (0.1001) lr_backbone: 0.000833 lr_middle_head: 0.000833 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 6198
2024-06-30 15:07:22,839 fcos_core.trainer INFO: eta: 2 days, 19:17:24 iter: 40 loss_gs: 4.0197 (4.9547) node_loss_gs: 1.6470 (1.8859) act_loss_gs: 0.0988 (0.2172) loss_cls_gs: 0.4741 (0.5811) loss_reg_gs: 1.0964 (1.6047) loss_centerness_gs: 0.6601 (0.6658) loss_ds: 0.3457 (0.3458) zeros: 0.0000 (0.0000) loss_adv_P7_CON_ds: 0.0703 (0.0703) loss_adv_P6_CON_ds: 0.0694 (0.0695) loss_adv_P5_CON_ds: 0.0685 (0.0685) loss_adv_P4_CON_ds: 0.0693 (0.0692) loss_adv_P3_CON_ds: 0.0681 (0.0683) loss_dt: 0.3479 (0.3477) zero_gt: 0.0000 (0.0000) loss_adv_P7_CON_dt: 0.0685 (0.0685) loss_adv_P6_CON_dt: 0.0693 (0.0692) loss_adv_P5_CON_dt: 0.0702 (0.0702) loss_adv_P4_CON_dt: 0.0694 (0.0694) loss_adv_P3_CON_dt: 0.0706 (0.0704) time: 3.0055 (3.0296) data: 0.0086 (0.0602) lr_backbone: 0.000833 lr_middle_head: 0.000833 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 6491
2024-06-30 15:08:22,805 fcos_core.trainer INFO: eta: 2 days, 19:02:31 iter: 60 loss_gs: 4.1649 (4.6570) node_loss_gs: 1.3220 (1.7088) act_loss_gs: 0.0711 (0.1687) loss_cls_gs: 0.9412 (0.7075) loss_reg_gs: 1.0037 (1.4089) loss_centerness_gs: 0.6574 (0.6631) loss_ds: 0.3455 (0.3457) zeros: 0.0000 (0.0000) loss_adv_P7_CON_ds: 0.0702 (0.0703) loss_adv_P6_CON_ds: 0.0692 (0.0694) loss_adv_P5_CON_ds: 0.0687 (0.0686) loss_adv_P4_CON_ds: 0.0695 (0.0693) loss_adv_P3_CON_ds: 0.0679 (0.0681) loss_dt: 0.3480 (0.3478) zero_gt: 0.0000 (0.0000) loss_adv_P7_CON_dt: 0.0685 (0.0685) loss_adv_P6_CON_dt: 0.0695 (0.0693) loss_adv_P5_CON_dt: 0.0700 (0.0701) loss_adv_P4_CON_dt: 0.0692 (0.0693) loss_adv_P3_CON_dt: 0.0708 (0.0706) time: 2.9881 (3.0192) data: 0.0089 (0.0467) lr_backbone: 0.000833 lr_middle_head: 0.000833 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 6491
2024-06-30 15:09:22,062 fcos_core.trainer INFO: eta: 2 days, 18:42:46 iter: 80 loss_gs: 3.9013 (4.4561) node_loss_gs: 1.2660 (1.6021) act_loss_gs: 0.0645 (0.1423) loss_cls_gs: 0.8427 (0.7446) loss_reg_gs: 0.9688 (1.3060) loss_centerness_gs: 0.6554 (0.6611) loss_ds: 0.3452 (0.3456) zeros: 0.0000 (0.0000) loss_adv_P7_CON_ds: 0.0700 (0.0702) loss_adv_P6_CON_ds: 0.0690 (0.0693) loss_adv_P5_CON_ds: 0.0687 (0.0686) loss_adv_P4_CON_ds: 0.0693 (0.0693) loss_adv_P3_CON_ds: 0.0682 (0.0681) loss_dt: 0.3482 (0.3479) zero_gt: 0.0000 (0.0000) loss_adv_P7_CON_dt: 0.0688 (0.0685) loss_adv_P6_CON_dt: 0.0697 (0.0694) loss_adv_P5_CON_dt: 0.0699 (0.0701) loss_adv_P4_CON_dt: 0.0693 (0.0693) loss_adv_P3_CON_dt: 0.0705 (0.0705) time: 2.9622 (3.0051) data: 0.0085 (0.0396) lr_backbone: 0.000833 lr_middle_head: 0.000833 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 6491
2024-06-30 15:10:21,933 fcos_core.trainer INFO: eta: 2 days, 18:38:41 iter: 100 loss_gs: 3.5796 (4.2845) node_loss_gs: 1.2510 (1.5350) act_loss_gs: 0.0591 (0.1266) loss_cls_gs: 0.6202 (0.7240) loss_reg_gs: 0.9625 (1.2387) loss_centerness_gs: 0.6580 (0.6602) loss_ds: 0.3446 (0.3454) zeros: 0.0000 (0.0000) loss_adv_P7_CON_ds: 0.0699 (0.0701) loss_adv_P6_CON_ds: 0.0688 (0.0692) loss_adv_P5_CON_ds: 0.0687 (0.0686) loss_adv_P4_CON_ds: 0.0689 (0.0692) loss_adv_P3_CON_ds: 0.0684 (0.0682) loss_dt: 0.3485 (0.3480) zero_gt: 0.0000 (0.0000) loss_adv_P7_CON_dt: 0.0688 (0.0686) loss_adv_P6_CON_dt: 0.0698 (0.0695) loss_adv_P5_CON_dt: 0.0699 (0.0701) loss_adv_P4_CON_dt: 0.0696 (0.0694) loss_adv_P3_CON_dt: 0.0702 (0.0705) time: 3.0099 (3.0028) data: 0.0077 (0.0355) lr_backbone: 0.000833 lr_middle_head: 0.000833 lr_fcos: 0.000833 lr_dis: 0.000833 max mem: 6491
2024-06-30 15:10:21,933 fcos_core.inference INFO: Start evaluation on ('cityscapes_foggy_val_cocostyle',) dataset(500 images).
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 125/125 [01:13<00:00, 1.70it/s]
2024-06-30 15:11:35,624 fcos_core.inference INFO: Preparing results for COCO format
2024-06-30 15:11:35,624 fcos_core.inference INFO: Preparing bbox results
2024-06-30 15:11:36,768 fcos_core.inference INFO: Evaluating predictions
Loading and preparing results...
DONE (t=1.27s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=12.31s).
Accumulating evaluation results...
DONE (t=0.59s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.001
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.003
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.009
Maximum f-measures for classes:
[0.0, 0.019636301545291557, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Score thresholds for classes (used in demos for visualization purposes):
[0.0, 0.23027010262012482, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
2024-06-30 15:11:52,272 fcos_core.inference INFO: OrderedDict([('bbox', OrderedDict([('AP', 5.6819832519004756e-05), ('AP50', 0.0003043063231258932), ('AP75', 2.225942018662298e-06), ('APs', 0.0003712871287128713), ('APm', 2.2652345072354306e-05), ('APl', 0.00036908724346504764)]))])

