transfiner's Issues

ImportError: cannot import name 'Caffe2Tracer' from 'detectron2.export'

Hey guys!

I'm running this command with export_model.py to save a TorchScript model:

python3 tools/deploy/export_model.py --config-file configs/Base-RetinaNet.yaml --format torchscript --output test

But I get this error:

File "/transfiner-main/tools/deploy/export_model.py", line 14, in
from detectron2.export import (
ImportError: cannot import name 'Caffe2Tracer' from 'detectron2.export' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/detectron2/export/__init__.py)

I'm using PyTorch 2.

Do you have any idea how to fix this issue?
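For context: detectron2 builds targeting PyTorch 2 appear to have dropped the Caffe2 export path entirely, so Caffe2Tracer no longer exists in detectron2.export. A quick way to see what your installed build actually exposes (a small sketch, nothing repo-specific):

import detectron2.export as export

# List the public names this detectron2 build provides; on recent versions
# you will typically find tracing/scripting helpers but no Caffe2Tracer.
print(sorted(name for name in dir(export) if not name.startswith("_")))

If Caffe2Tracer is missing there, the usual options are to pin an older detectron2/PyTorch pair that still ships the Caffe2 components, or to use whatever tracing/scripting helpers your build lists instead.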

thank you,

Paul

Strange visualization result.

When I executed the visualization command
python3 demo/demo.py --config-file configs/transfiner/mask_rcnn_R_50_FPN_3x.yaml --input 'demo/sample_imgs/000000018737.jpg' --opts MODEL.WEIGHTS ./pretrained_model/output_3x_transfiner_r50.pth
the segmentation result for the demo image 000000018737.jpg was strange. Why is this?
My PyTorch version is 1.10; could the PyTorch version be the cause?

[attached result images: 000000018737.jpg, 000000321214.jpg, 000000132408.jpg]

EasyInstallDeprecationWarning: easy_install command is deprecated.

:~/Downloads/cocoapi-master/PythonAPI$ python setup.py build_ext install
running build_ext
skipping 'pycocotools/_mask.c' Cython extension (up-to-date)
running install
/home/zhujie01/miniconda3/envs/transfiner/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
/home/zhujie01/miniconda3/envs/transfiner/lib/python3.7/site-packages/setuptools/command/easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
EasyInstallDeprecationWarning,
running bdist_egg
running egg_info
error: [Errno 13] Permission denied
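For what it's worth, the final "error: [Errno 13] Permission denied" is probably unrelated to the deprecation warnings above: setup.py install is trying to write the built egg into a directory the current user cannot write to. Assuming that is the cause, installing through pip in the activated environment usually sidesteps both the permission problem and the deprecated easy_install path:

pip install pycocotools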

Installation method

I want to know: is the installation procedure for this project the same as for detectron2 on Windows?

got fvcore.common.checkpoint WARNING

Thanks for your great contribution!
Except for changing the dataset, everything else follows the repo guidance, but I get this warning when training with my own COCO-format dataset: fvcore.common.checkpoint WARNING: Some model parameters or buffers are not found in the checkpoint:
backbone.fpn_lateral2.{bias, weight}
backbone.fpn_lateral3.{bias, weight}
backbone.fpn_lateral4.{bias, weight}
backbone.fpn_lateral5.{bias, weight}
backbone.fpn_output2.{bias, weight}
backbone.fpn_output3.{bias, weight}
backbone.fpn_output4.{bias, weight}
backbone.fpn_output5.{bias, weight}
proposal_generator.rpn_head.anchor_deltas.{bias, weight}
proposal_generator.rpn_head.conv.{bias, weight}
proposal_generator.rpn_head.objectness_logits.{bias, weight}
roi_heads.box_head.fc1.{bias, weight}
roi_heads.box_head.fc2.{bias, weight}
......
What caused this?
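For reference: if MODEL.WEIGHTS points at an ImageNet-pretrained backbone (e.g. the R-50 weights from the model zoo), this warning is expected. The listed FPN, RPN and ROI-head parameters simply do not exist in a backbone-only checkpoint; they are freshly initialized and then learned during training. It would only indicate a problem if you intended to resume from a fully trained transfiner checkpoint.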

RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast

File "/export/software/anacondamini/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/export/segment/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 537, in forward
x, x_uncertain, x_bo, x_hr, x_hr_l, x_hr_ll, x_c, x_p2_s, encoder, instances, self.vis_period)
File "/export/segment/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 296, in mask_rcnn_loss
pred_mask_logits_uncertain, gt_masks_uncertain, pred_mask_logits_uncertain.shape[0]) + F.binary_cross_entropy(pred_mask_logits_uncertain, gt_masks_uncertain, reduction="mean")
File "/export/software/anacondamini/lib/python3.7/site-packages/torch/nn/functional.py", line 2526, in binary_cross_entropy
input, target, weight, reduction_enum)

RuntimeError: torch.nn.functional.binary_cross_entropy and torch.nn.BCELoss are unsafe to autocast.
Many models use a sigmoid layer right before the binary cross entropy layer.
In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits
or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogits are
safe to autocast.
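The error message itself points at the fix: under autocast, compute the BCE on raw logits instead of on sigmoid outputs. A minimal sketch of the pattern (the tensor names here are illustrative, not the repo's actual variables):

import torch
import torch.nn.functional as F

logits = torch.randn(4, 1, 28, 28)   # raw mask scores, before any sigmoid
targets = torch.rand(4, 1, 28, 28)   # targets in [0, 1]

# Unsafe under autocast (this is what raises the RuntimeError):
# loss = F.binary_cross_entropy(torch.sigmoid(logits), targets, reduction="mean")

# Safe: the sigmoid is fused into the loss, and autocast handles it correctly.
loss = F.binary_cross_entropy_with_logits(logits, targets, reduction="mean")

In the repo's case this would mean keeping pred_mask_logits_uncertain as raw logits and switching the loss call accordingly.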

TypeError: iteration over a 0-d tensor

After trying to debug for hours, I found that this error occurs because only 1 value is returned instead of 5 values from this code:

File location:

detectron2/modeling/roi_heads/mask_head.py

At line 186, if the length of instances is 1 and, at line 189, the condition if len(instances_per_image) == 0 is True, the whole for-loop is skipped.

Execution then reaches line 237, whose condition becomes True, and enters that block. At line 238, instead of returning 5 loss values, only 1 value is returned, which causes the error. I replaced the line with:

return torch.tensor(0), torch.tensor(0), torch.tensor(0), torch.tensor(0), torch.tensor(0)

This temporarily fixed the error, but I would like to know what the ideal loss values to return would be.
@lkeab
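For what it's worth, upstream detectron2 handles the empty-instances case by deriving the zero loss from a prediction tensor rather than from torch.tensor(0). A hedged sketch of that pattern for a head that returns five losses (the function name is hypothetical; adapt the tensor name to mask_head.py):

import torch

def empty_mask_losses(pred_mask_logits: torch.Tensor):
    # Deriving the zeros from an existing prediction tensor keeps the correct
    # device and dtype, and the result stays attached to the autograd graph;
    # torch.tensor(0) is a detached CPU integer scalar, which can break
    # backward() and DDP gradient synchronization.
    zero = pred_mask_logits.sum() * 0
    return zero, zero, zero, zero, zero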

Input images work well with --input, but the webcam does not work with --webcam

I run python demo/demo.py --config-file configs/transfiner/mask_rcnn_R_50_FPN_1x.yaml --webcam --opts MODEL.WEIGHTS ./pretrained_model/output_1x_transfiner_r50.pth, but many errors occur.

[07/20 21:26:09 fvcore.common.checkpoint]: [Checkpointer] Loading from ./pretrained_model/output_1x_transfiner_r50.pth ...
0it [00:00, ?it/s]C:\software\Miniconda3\envs\transfiner\lib\site-packages\detectron2-0.5-py3.7-win-amd64.egg\detectron2\modeling\roi_heads\fast_rcnn.py:154: UserWarning: This overload of nonzero is deprecated:
nonzero()
Consider using one of the following signatures instead:
nonzero(*, bool as_tuple) (Triggered internally at ..\torch\csrc\utils\python_arg_parser.cpp:882.)
filter_inds = filter_mask.nonzero()
C:\software\Miniconda3\envs\transfiner\lib\site-packages\torch\nn\functional.py:3063: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
C:\software\Miniconda3\envs\transfiner\lib\site-packages\detectron2-0.5-py3.7-win-amd64.egg\detectron2\utils\video_visualizer.py:190: DeprecationWarning: np.bool is a deprecated alias for the builtin bool. To silence this warning, use bool by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use np.bool_ here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
is_crowd = np.zeros((len(instances),), dtype=np.bool)
0it [00:04, ?it/s]
Traceback (most recent call last):
File "demo/demo.py", line 143, in
for vis in tqdm.tqdm(demo.run_on_video(cam)):
File "C:\software\Miniconda3\envs\transfiner\lib\site-packages\tqdm\std.py", line 1195, in iter
for obj in iterable:
File "C:\Users\hanji\Downloads\transfiner-main\demo\predictor.py", line 129, in run_on_video
yield process_predictions(frame, self.predictor(frame))
File "C:\Users\hanji\Downloads\transfiner-main\demo\predictor.py", line 98, in process_predictions
vis_frame = video_visualizer.draw_instance_predictions(frame, predictions)
File "C:\software\Miniconda3\envs\transfiner\lib\site-packages\detectron2-0.5-py3.7-win-amd64.egg\detectron2\utils\video_visualizer.py", line 88, in draw_instance_predictions
colors = self._assign_colors(detected)
File "C:\software\Miniconda3\envs\transfiner\lib\site-packages\detectron2-0.5-py3.7-win-amd64.egg\detectron2\utils\video_visualizer.py", line 233, in _assign_colors
inst.color = random_color(rgb=True, maximum=1)
TypeError: random_color() missing 1 required positional argument: 'ii'

After fixing random_color(), other errors still occurred.

The out of memory problem

Hi, thanks for your work. I am unfamiliar with detectron2, hence this issue.
I use the same GPU, an NVIDIA RTX 2080 Ti (a single card). However, whenever I tried training I hit an out-of-memory error, and reducing the batch size to 1 did not help.
Training only completes with MIN_SIZE_TRAIN: (100,) and MAX_SIZE_TRAIN: 200.

So I suspect something is wrong here, and I would like to know why and how to fix it.

For example:
dataset: coco

The changed config (mask_rcnn_R_50_FPN_1x_4gpu_transfiner.yaml / Base-RCNN-FPN-4gpu.yaml):

SOLVER:
  IMS_PER_BATCH: 1  # 8 # 16
  BASE_LR: 0.0025  # 0.02
  STEPS: (60000, 80000)
  MAX_ITER: 90000
INPUT:
  # MIN_SIZE_TRAIN: (640, 672, 704, 736, 768, 800)
  MIN_SIZE_TRAIN: (200,)
  MAX_SIZE_TRAIN: 300
DATALOADER:
  NUM_WORKERS: 1

The log:
[04/27 13:03:34 d2.utils.events]: eta: 4:00:48 iter: 319 total_loss: 2.127 loss_cls: 0.1886 loss_box_reg: 0.1425 loss_mask: 0.3233 loss_mask_uncertain: 0.6131 loss_mask_refine: 0.4374 loss_semantic: 0.08179 loss_rpn_cls: 0.1416 loss_rpn_loc: 0.04266 time: 0.1666 data_time: 0.0013 lr: 0.0007992 max_mem: 8277M
[04/27 13:03:39 d2.utils.events]: eta: 4:01:09 iter: 339 total_loss: 2.429 loss_cls: 0.1466 loss_box_reg: 0.1322 loss_mask: 0.3433 loss_mask_uncertain: 0.588 loss_mask_refine: 0.4261 loss_semantic: 0.1083 loss_rpn_cls: 0.1618 loss_rpn_loc: 0.1142 time: 0.1714 data_time: 0.0011 lr: 0.00084915 max_mem: 8584M
[04/27 13:03:45 d2.utils.events]: eta: 4:03:05 iter: 359 total_loss: 1.859 loss_cls: 0.1425 loss_box_reg: 0.136 loss_mask: 0.2608 loss_mask_uncertain: 0.5863 loss_mask_refine: 0.408 loss_semantic: 0.08011 loss_rpn_cls: 0.131 loss_rpn_loc: 0.02407 time: 0.1784 data_time: 0.0012 lr: 0.0008991 max_mem: 8584M
[04/27 13:03:52 d2.utils.events]: eta: 4:04:25 iter: 379 total_loss: 2.212 loss_cls: 0.2625 loss_box_reg: 0.1868 loss_mask: 0.2859 loss_mask_uncertain: 0.576 loss_mask_refine: 0.4269 loss_semantic: 0.1137 loss_rpn_cls: 0.1371 loss_rpn_loc: 0.07165 time: 0.1867 data_time: 0.0014 lr: 0.00094905 max_mem: 8584M
ERROR [04/27 13:03:54 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/train_loop.py", line 149, in train
self.run_step()
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/defaults.py", line 493, in run_step
self._trainer.run_step()
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/train_loop.py", line 273, in run_step
loss_dict = self.model(data)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/meta_arch/rcnn.py", line 172, in forward
_, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 521, in forward
losses.update(self._forward_mask(features, proposals))
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 677, in _forward_mask
return self.mask_head(features, instances)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 656, in forward
x, x_uncertain, x_hr, x_hr_l, x_hr_ll, x_c, x_p2_s, encoder, instances, self.vis_period)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 584, in mask_rcnn_loss
select_box_feats_cat, select_box_feats_cat_pos).permute(1, 2, 0).unsqueeze(-1)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 1056, in forward
output = layer(output, pos) #encoder
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 1031, in forward
src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/functional.py", line 1076, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 0; 10.76 GiB total capacity; 8.30 GiB already allocated; 218.69 MiB free; 8.64 GiB reserved in total by PyTorch)
[04/27 13:03:54 d2.engine.hooks]: Overall training speed: 384 iterations in 0:01:13 (0.1903 s / it)
[04/27 13:03:54 d2.engine.hooks]: Total training time: 0:01:13 (0:00:00 on hooks)
[04/27 13:03:54 d2.utils.events]: eta: 4:05:02 iter: 386 total_loss: 2.723 loss_cls: 0.2625 loss_box_reg: 0.247 loss_mask: 0.305 loss_mask_uncertain: 0.5775 loss_mask_refine: 0.4294 loss_semantic: 0.1251 loss_rpn_cls: 0.179 loss_rpn_loc: 0.1717 time: 0.1897 data_time: 0.0014 lr: 0.00096404 max_mem: 8584M
Traceback (most recent call last):
File "tools/train_net.py", line 169, in
args=(args,),
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/launch.py", line 82, in launch
main_func(*args)
File "tools/train_net.py", line 157, in main
return trainer.train()
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/defaults.py", line 483, in train
super().train(self.start_iter, self.max_iter)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/train_loop.py", line 149, in train
self.run_step()
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/defaults.py", line 493, in run_step
self._trainer.run_step()
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/engine/train_loop.py", line 273, in run_step
loss_dict = self.model(data)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/meta_arch/rcnn.py", line 172, in forward
_, detector_losses = self.roi_heads(images, features, proposals, gt_instances)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 521, in forward
losses.update(self._forward_mask(features, proposals))
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 677, in _forward_mask
return self.mask_head(features, instances)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 656, in forward
x, x_uncertain, x_hr, x_hr_l, x_hr_ll, x_c, x_p2_s, encoder, instances, self.vis_period)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 584, in mask_rcnn_loss
select_box_feats_cat, select_box_feats_cat_pos).permute(1, 2, 0).unsqueeze(-1)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 1056, in forward
output = layer(output, pos) #encoder
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/media/huang/5474B47974B45F82/zhk/transfiner/detectron2/modeling/roi_heads/mask_head.py", line 1031, in forward
src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/modules/dropout.py", line 58, in forward
return F.dropout(input, self.p, self.training, self.inplace)
File "/home/huang/anaconda3/envs/transfier/lib/python3.7/site-packages/torch/nn/functional.py", line 1076, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: CUDA out of memory. Tried to allocate 194.00 MiB (GPU 0; 10.76 GiB total capacity; 8.30 GiB already allocated; 218.69 MiB free; 8.64 GiB reserved in total by PyTorch)

Looking forward to your reply. :)

compile fail

Hi everyone,

I'm on Windows 11, and I'm getting this error when I follow the step-by-step guide (after cd transfiner/ and
python3 setup.py build develop):

FAILED: C:/Users/Shadow/transfiner/build/temp.win-amd64-cpython-37/Release/Users/Shadow/transfiner/detectron2/layers/csrc/nms_rotated/nms_rotated_cuda.obj
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -DWITH_CUDA -IC:\Users\Shadow\transfiner\detectron2\layers\csrc -IC:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\include -IC:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\include\TH -IC:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" -IC:\ProgramData\anaconda3\envs\transfiner\include -IC:\ProgramData\anaconda3\envs\transfiner\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.35.32215\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt" -c C:\Users\Shadow\transfiner\detectron2\layers\csrc\nms_rotated\nms_rotated_cuda.cu -o C:\Users\Shadow\transfiner\build\temp.win-amd64-cpython-37\Release\Users\Shadow\transfiner\detectron2\layers\csrc\nms_rotated\nms_rotated_cuda.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -O3 -DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86
nvcc fatal : Unsupported gpu architecture 'compute_86'
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\utils\cpp_extension.py", line 1539, in _run_ninja_build
env=env)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "setup.py", line 207, in
cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_init_.py", line 87, in setup
return distutils.core.setup(**attrs)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\core.py", line 185, in setup
return run_commands(dist)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\command\build.py", line 132, in run
self.run_command(cmd_name)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools\dist.py", line 1208, in run_command
super().run_command(command)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
_build_ext.run(self)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\Cython\Distutils\old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\command\build_ext.py", line 346, in run
self.build_extensions()
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\utils\cpp_extension.py", line 670, in build_extensions
build_ext.build_extensions(self)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\Cython\Distutils\old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\command\build_ext.py", line 468, in build_extensions
self._build_extensions_serial()
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\command\build_ext.py", line 494, in _build_extensions_serial
self.build_extension(ext)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
_build_ext.build_extension(self, ext)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\setuptools_distutils\command\build_ext.py", line 556, in build_extension
depends=ext.depends,
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\utils\cpp_extension.py", line 652, in win_wrap_ninja_compile
with_cuda=with_cuda)
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\utils\cpp_extension.py", line 1255, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "C:\ProgramData\anaconda3\envs\transfiner\lib\site-packages\torch\utils\cpp_extension.py", line 1555, in _run_ninja_build
raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

Do you have any idea what could cause this error?

kind regards,

Paul
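A note on the likely cause of the nvcc failure above: compute_86 is the Ampere architecture (RTX 30-series GPUs), and CUDA 11.0's nvcc only supports architectures up to sm_80, hence "Unsupported gpu architecture 'compute_86'". Assuming that diagnosis, either upgrade the CUDA toolkit to 11.1 or newer, or pin the build to an architecture your toolkit does know; sm_80 binaries run on sm_86 GPUs, so on Windows something like this should let the extension compile:

set TORCH_CUDA_ARCH_LIST=8.0
python3 setup.py build develop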

issues with Detectron2

Is anyone else facing issues with detectron2? I have a problem installing detectron2 at the last step (setup.py) in the transfiner folder.

.

You can slightly reduce the limit here if memory is tight.

Originally posted by @lkeab in #35 (comment)

ImportError: DLL load failed: The specified module cannot be found.

(transfiner) D:\python\new_maskrcnn\transfiner-main\transfiner\tools>python train_net.py
Traceback (most recent call last):
File "train_net.py", line 27, in
from detectron2.data import MetadataCatalog
File "D:\python\new_maskrcnn\transfiner-main\transfiner\tools\detectron2\data_init_.py", line 2, in
from . import transforms # isort:skip
File "D:\python\new_maskrcnn\transfiner-main\transfiner\tools\detectron2\data\transforms_init_.py", line 4, in
from .transform import *
File "D:\python\new_maskrcnn\transfiner-main\transfiner\tools\detectron2\data\transforms\transform.py", line 19, in
from PIL import Image
File "C:\Users\daskv\Miniconda3\envs\transfiner\lib\site-packages\PIL\Image.py", line 100, in
from . import _imaging as core
ImportError: DLL load failed: The specified module cannot be found.

Another question: is this usable on Windows?

My environment:
antlr-python-runtime 4.9.3 pyhd8ed1ab_1 conda-forge
blas 1.0 mkl
ca-certificates 2022.12.7 h5b45459_0 conda-forge
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cloudpickle 2.2.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cudatoolkit 11.0.221 h74a9793_0
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
cython 0.29.33 pypi_0 pypi
dataclasses 0.8 pyhc8e2a94_3 conda-forge
fonttools 4.38.0 pypi_0 pypi
freetype 2.12.1 ha860e81_0
fvcore 0.1.5.post20221221 pyhd8ed1ab_0 conda-forge
imageio 2.26.1 pypi_0 pypi
imgviz 1.7.2 pypi_0 pypi
intel-openmp 2021.4.0 haa95532_3556
iopath 0.1.9 pyhd8ed1ab_0 conda-forge
jpeg 9b hb83a4c4_2
kiwisolver 1.4.4 pypi_0 pypi
kornia 0.5.11 pypi_0 pypi
labelme 5.1.1 pypi_0 pypi
lerc 3.0 hd77b12b_0
libdeflate 1.17 h2bbff1b_0
libpng 1.6.39 h8cc25b3_0
libtiff 4.5.0 h8a3f274_0
libuv 1.44.2 h2bbff1b_0
libwebp 1.2.4 h2bbff1b_0
libwebp-base 1.2.4 h2bbff1b_1
lz4-c 1.9.4 h2bbff1b_0
matplotlib 3.5.3 pypi_0 pypi
matplotlib-base 3.4.3 py37h4a79c79_2 conda-forge
mkl 2021.4.0 haa95532_640
mkl-service 2.4.0 py37h2bbff1b_0
mkl_fft 1.3.1 py37h277e83a_0
mkl_random 1.2.2 py37hf11a4ad_0
natsort 8.3.1 pypi_0 pypi
networkx 2.6.3 pypi_0 pypi
ninja 1.11.1 pypi_0 pypi
ninja-base 1.10.2 h6d14046_5
numpy 1.21.5 py37h7a0a035_3
numpy-base 1.21.5 py37hca35cd5_3
omegaconf 2.3.0 pyhd8ed1ab_0 conda-forge
opencv-python 4.4.0.40 pypi_0 pypi
openssl 1.1.1t h2bbff1b_0
packaging 23.0 pypi_0 pypi
pillow 9.4.0 pypi_0 pypi
pip 22.3.1 py37haa95532_0
portalocker 1.4.0 py_0 conda-forge
pycocotools 2.0.4 py37h3a130e4_1 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pyqt5 5.15.9 pypi_0 pypi
pyqt5-qt5 5.15.2 pypi_0 pypi
pyqt5-sip 12.11.1 pypi_0 pypi
python 3.7.16 h6244533_0
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python_abi 3.7 2_cp37m conda-forge
pytorch 1.7.1 py3.7_cuda110_cudnn8_0 pytorch
pywavelets 1.3.0 pypi_0 pypi
pywin32 305 py37h2bbff1b_0 anaconda
pyyaml 6.0 py37hcc03f2d_4 conda-forge
qtpy 2.3.0 pypi_0 pypi
scikit-image 0.19.3 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
setuptools 65.6.3 py37haa95532_0
six 1.16.0 pyhd3eb1b0_1
sqlite 3.41.1 h2bbff1b_0
tabulate 0.9.0 pyhd8ed1ab_1 conda-forge
termcolor 2.2.0 pyhd8ed1ab_0 conda-forge
tifffile 2021.11.2 pypi_0 pypi
tk 8.6.12 h2bbff1b_0
torchaudio 0.7.2 py37 pytorch
torchvision 0.8.2 py37_cu110 pytorch
tornado 6.2 py37hcc03f2d_0 conda-forge
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 pypi_0 pypi
typing_extensions 4.3.0 py37haa95532_0
vc 14.2 h21ff451_1
vs2015_runtime 14.27.29016 h5e58377_2
wheel 0.38.4 py37haa95532_0
wincertstore 0.2 py37haa95532_2
xz 5.2.10 h8cc25b3_1
yacs 0.1.8 pyhd8ed1ab_0 conda-forge
yaml 0.2.5 h8ffe710_2 conda-forge
zlib 1.2.13 h8cc25b3_0
zstd 1.5.2 h19a0ad4_0
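(Given the environment above, one common cause of this failure, offered tentatively: a pip-installed Pillow inside a conda environment can link against a Microsoft VC runtime that is not present, which makes the PIL._imaging DLL import fail. Reinstalling Pillow from conda, e.g. conda install pillow, or installing the matching VC++ redistributable, often resolves it.)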

Some model parameters or buffers are not found in the checkpoint

When I tried to run demo.py, this warning occurred. How can I solve it?

Some model parameters or buffers are not found in the checkpoint:
roi_heads.mask_head.deconv_uncertain.{bias, weight}
roi_heads.mask_head.encoder.conv_fuse.{bias, weight}
roi_heads.mask_head.encoder.conv_r1.0.{bias, weight}
roi_heads.mask_head.encoder.conv_r1.2.{bias, weight}
roi_heads.mask_head.encoder.layers.0.linear1.{bias, weight}
roi_heads.mask_head.encoder.layers.0.linear2.{bias, weight}
roi_heads.mask_head.encoder.layers.0.norm1.{bias, weight}
roi_heads.mask_head.encoder.layers.0.norm2.{bias, weight}
roi_heads.mask_head.encoder.layers.0.self_attn.out_proj.{bias, weight}
roi_heads.mask_head.encoder.layers.0.self_attn.{in_proj_bias, in_proj_weight}
roi_heads.mask_head.encoder.layers.1.linear1.{bias, weight}
roi_heads.mask_head.encoder.layers.1.linear2.{bias, weight}
roi_heads.mask_head.encoder.layers.1.norm1.{bias, weight}
roi_heads.mask_head.encoder.layers.1.norm2.{bias, weight}
roi_heads.mask_head.encoder.layers.1.self_attn.out_proj.{bias, weight}
roi_heads.mask_head.encoder.layers.1.self_attn.{in_proj_bias, in_proj_weight}
roi_heads.mask_head.encoder.layers.2.linear1.{bias, weight}
roi_heads.mask_head.encoder.layers.2.linear2.{bias, weight}
roi_heads.mask_head.encoder.layers.2.norm1.{bias, weight}
roi_heads.mask_head.encoder.layers.2.norm2.{bias, weight}
roi_heads.mask_head.encoder.layers.2.self_attn.out_proj.{bias, weight}
roi_heads.mask_head.encoder.layers.2.self_attn.{in_proj_bias, in_proj_weight}
roi_heads.mask_head.mask_fcn_uncertain1.{bias, weight}
roi_heads.mask_head.mask_fcn_uncertain2.{bias, weight}
roi_heads.mask_head.mask_fcn_uncertain3.{bias, weight}
roi_heads.mask_head.mask_fcn_uncertain4.{bias, weight}
roi_heads.mask_head.predictor_semantic_s.{bias, weight}
roi_heads.mask_head.predictor_uncertain.{bias, weight}
roi_heads.mask_pooler.conv_norm_relus_semantic.0.{bias, weight}
roi_heads.mask_pooler.conv_norm_relus_semantic.2.{bias, weight}
roi_heads.mask_pooler.conv_norm_relus_semantic.4.{bias, weight}
roi_heads.mask_pooler.conv_norm_relus_semantic.6.{bias, weight}
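For reference: the missing keys listed above are exactly the transfiner-specific modules (the uncertainty convolutions, the semantic branch, and the quadtree encoder layers), so the checkpoint being loaded is most likely a plain Mask R-CNN weight file rather than a transfiner one. Double-check that MODEL.WEIGHTS points at the transfiner checkpoint that matches the chosen config.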

No predictions from the model!

When I was testing COCO2017, the following error occurred:

[04/02 16:19:28 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[04/02 16:19:28 d2.evaluation.coco_evaluation]: Saving results to ./output/inference/coco_instances_results.json
[04/02 16:19:28 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
/home/Bxl/hujinwu/experience/transfiner-main/detectron2/evaluation/coco_evaluation.py:329: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
self._logger.warn("No predictions from the model!")
WARNING [04/02 16:19:28 d2.evaluation.coco_evaluation]: No predictions from the model!
[04/02 16:19:28 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[04/02 16:19:28 d2.evaluation.testing]: copypaste: Task: bbox
[04/02 16:19:28 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[04/02 16:19:28 d2.evaluation.testing]: copypaste: nan,nan,nan,nan,nan,nan

Hello, please tell me why I get "No predictions from the model!". Thanks.
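A hedged pointer: nan AP together with "No predictions from the model!" simply means the evaluator received an empty result set. The usual suspects are weights that did not load correctly (check the checkpointer log for missing or shape-mismatched keys) or a score-threshold/class-count mismatch that filters every detection out.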

add web demo/model to Huggingface

Hi, would you be interested in adding transfiner to Hugging Face? The Hub offers free hosting, and it would make your work more accessible and visible to the rest of the ML community.

Example from other organizations:
Keras: https://huggingface.co/keras-io
Microsoft: https://huggingface.co/microsoft
Facebook: https://huggingface.co/facebook

Example spaces with repos:
github: https://github.com/salesforce/BLIP
Spaces: https://huggingface.co/spaces/salesforce/BLIP

github: https://github.com/facebookresearch/omnivore
Spaces: https://huggingface.co/spaces/akhaliq/omnivore

and here are guides for adding spaces/models/datasets to your org

How to add a Space: https://huggingface.co/blog/gradio-spaces
how to add models: https://huggingface.co/docs/hub/adding-a-model
uploading a dataset: https://huggingface.co/docs/datasets/upload_dataset.html

Please let us know if you would be interested and if you have any questions, we can also help with the technical implementation.

Difference in the results from the HF demo and Colab

Hi,

I am trying this model on Colab and I am seeing differences in the results compared to the Hugging Face demo.

I tried multiple variants (R101-3x-deform, R50, R50-3x, R50-3x-deform). The Hugging Face results seem superior to me w.r.t. mask accuracy. I downloaded the models from the Drive links in the README.

Here is my code

import cv2
from google.colab.patches import cv2_imshow
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.engine import DefaultPredictor
from detectron2.utils.visualizer import Visualizer

config_file = "/content/transfiner/configs/transfiner/mask_rcnn_R_50_FPN_3x.yaml"

cfg = get_cfg()
cfg.merge_from_file(config_file)
cfg.MODEL.WEIGHTS = "/content/transfiner/pretrained_models/output_3x_transfiner_r50.pth"

cfg.MODEL.RETINANET.SCORE_THRESH_TEST = .5
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = .5
cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = .5
cfg.freeze()

im = cv2.imread("/content/img_000124785.jpg")
predictor = DefaultPredictor(cfg)
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1], MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=1.2)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(out.get_image()[:, :, ::-1])

I see this notice in the logs while running the prediction, which looks suspicious:

The checkpoint state_dict contains keys that are not used by the model:
  roi_heads.mask_pooler.conv_norm_relus_semantic.0.{bias, weight}
  roi_heads.mask_pooler.conv_norm_relus_semantic.2.{bias, weight}
  roi_heads.mask_pooler.conv_norm_relus_semantic.4.{bias, weight}
  roi_heads.mask_pooler.conv_norm_relus_semantic.6.{bias, weight}
  roi_heads.mask_head.deconv_bo.{bias, weight}
  roi_heads.mask_head.predictor_bo.{bias, weight}
  roi_heads.mask_head.encoder.layers.0.self_attn.{in_proj_bias, in_proj_weight}
  roi_heads.mask_head.encoder.layers.0.self_attn.out_proj.{bias, weight}
  roi_heads.mask_head.encoder.layers.0.linear1.{bias, weight}
  roi_heads.mask_head.encoder.layers.0.linear2.{bias, weight}
  roi_heads.mask_head.encoder.layers.0.norm1.{bias, weight}
  roi_heads.mask_head.encoder.layers.0.norm2.{bias, weight}
  roi_heads.mask_head.encoder.layers.1.self_attn.{in_proj_bias, in_proj_weight}
  roi_heads.mask_head.encoder.layers.1.self_attn.out_proj.{bias, weight}
  roi_heads.mask_head.encoder.layers.1.linear1.{bias, weight}
  roi_heads.mask_head.encoder.layers.1.linear2.{bias, weight}
  roi_heads.mask_head.encoder.layers.1.norm1.{bias, weight}
  roi_heads.mask_head.encoder.layers.1.norm2.{bias, weight}
  roi_heads.mask_head.encoder.layers.2.self_attn.{in_proj_bias, in_proj_weight}
  roi_heads.mask_head.encoder.layers.2.self_attn.out_proj.{bias, weight}
  roi_heads.mask_head.encoder.layers.2.linear1.{bias, weight}
  roi_heads.mask_head.encoder.layers.2.linear2.{bias, weight}
  roi_heads.mask_head.encoder.layers.2.norm1.{bias, weight}
  roi_heads.mask_head.encoder.layers.2.norm2.{bias, weight}
  roi_heads.mask_head.encoder.conv_fuse.{bias, weight}
  roi_heads.mask_head.encoder.conv_r1.0.{bias, weight}
  roi_heads.mask_head.encoder.conv_r1.2.{bias, weight}
  roi_heads.mask_head.mask_fcn_uncertain1.{bias, weight}
  roi_heads.mask_head.mask_fcn_uncertain2.{bias, weight}
  roi_heads.mask_head.mask_fcn_uncertain3.{bias, weight}
  roi_heads.mask_head.mask_fcn_uncertain4.{bias, weight}
  roi_heads.mask_head.deconv_uncertain.{bias, weight}
  roi_heads.mask_head.predictor_uncertain.{bias, weight}
  roi_heads.mask_head.predictor_semantic_s.{bias, weight}

Could this be the reason for the difference in results? In general, mask accuracy seems poor even with the R101-3x-deform model (which has the highest mAP), compared to the results I am seeing in the Hugging Face demo.

Any idea what might be happening here?

TIA.
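A tentative diagnosis of the warning above: "keys that are not used by the model" for exactly the transfiner-specific modules suggests the model was built without the transfiner mask head at all, e.g. because a stock detectron2 installation shadows this fork on the Colab import path, so the extra weights are silently dropped and a vanilla Mask R-CNN runs instead. Checking which package is actually imported (print(detectron2.__file__)) would confirm or rule this out.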

Unfair comparison with HTC and RefineMask?

Hi authors, congratulations on your CVPR paper.

I have a question about the experimental results in Table 9 of the paper.

The results you cite for HTC-R101 (39.7) and RefineMask-R101 (39.4) were trained with a 20-epoch and a 2x schedule respectively, and neither used multi-scale training. But Mask Transfiner R101 (40.7) was trained with a 3x schedule and multi-scale data augmentation (the default training setting in detectron2). Trained under the same setting, RefineMask-R101 reaches 41.2, and HTC-R101 is even higher. In addition, there are some errors in the cited Mask R-CNN results, too.

Is that the case? Thank you in advance.

Some problems about installation

Hello author, I found some problems during installation. The PyTorch version supported by detectron2 must be 1.8 or above, but the PyTorch version in your installation guide is 1.7, which caused some problems and prevented me from deploying the project successfully.

About speed

Hi authors, I have some questions about the speed of your method. I downloaded your pretrained ResNet-101 model weights and ran them on a Titan RTX (24 GB) GPU; the inference speed is about 1 second per image.
d2.evaluation.evaluator INFO: Inference done 1144/1250. Dataloading: 0.0014 s/iter. Inference: 0.9478 s/iter. Eval: 0.0187 s/iter. Total: 0.9679 s/iter. ETA=0:01:42
In your paper you report that with ResNet-101 DCN your model runs at 6.1 FPS, also on a Titan RTX GPU. How can I get the same inference speed as reported in the paper?
By the way, the inference speed of PointRend is about 0.1 seconds per image, i.e. roughly 10x faster than your model in my experiments.
d2.evaluation.evaluator INFO: Inference done 987/1250. Dataloading: 0.0014 s/iter. Inference: 0.0888 s/iter. Eval: 0.0190 s/iter. Total: 0.1093 s/iter. ETA=0:00:28
Waiting for your reply.

Asking for code component

Hi @lkeab,
Thank you for your consideration.

I see that the proposed method is mainly in mask_head.py; however, I find the code hard to follow. Would you mind showing me where the code for the component below is?

[image attachment]

CUDA out of memory

Your team has done an excellent job. When I use four NVIDIA RTX 2080s with batch_size set to the minimum of 4, the run always ends with 'CUDA out of memory'. I would like to know whether there are any parameters in the model that can be reduced to solve this problem. Thank you very much.
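For reference, a few standard detectron2 config levers that usually lower peak training memory; this is a hedged sketch with illustrative values, not tuned settings for this fork:

from detectron2.config import get_cfg

# Illustrative overrides on a default config; apply the same keys in the
# transfiner YAML you are actually training with.
cfg = get_cfg()
cfg.SOLVER.IMS_PER_BATCH = 4                    # one image per GPU across 4 cards
cfg.INPUT.MIN_SIZE_TRAIN = (640,)               # lower training resolution
cfg.INPUT.MAX_SIZE_TRAIN = 1066
cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN = 500         # fewer proposals into the ROI heads
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 256  # fewer sampled ROIs (default 512)

The maintainer's suggestion quoted in issue #35 above (reducing the limit in the mask head) applies on top of these.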

about the detected incoherent masks module

Hello! I am interested in your work, but I have a question about this passage:
"use a simple fully convolutional network (four 3×3 convs) followed by a binary classifier to predict the coarsest incoherence masks. Then, the detected lower-resolution masks are upsampled and fused with the larger-resolution feature in the neighboring level to guide the finer incoherence predictions, where only a single 1×1 convolution layer is employed."
I can't find the single 1×1 convolution layer in the code. Could you give me a pointer? Thanks!

Visual error

[02/27 16:57:48 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01733.jpg: detected 0 instances in 3.42s
1%|█ | 1/165 [00:03<09:34, 3.50s/it][02/27 16:57:48 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01090.jpg: detected 0 instances in 0.17s
1%|██ | 2/165 [00:03<04:15, 1.57s/it][02/27 16:57:48 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01755.jpg: detected 0 instances in 0.19s
2%|███ | 3/165 [00:03<02:34, 1.05it/s][02/27 16:57:48 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01601.jpg: detected 0 instances in 0.16s
2%|████▏ | 4/165 [00:04<01:46, 1.51it/s][02/27 16:57:49 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01586.jpg: detected 0 instances in 0.13s
3%|█████▏ | 5/165 [00:04<01:17, 2.06it/s][02/27 16:57:49 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01773.jpg: detected 0 instances in 0.16s
4%|██████▏ | 6/165 [00:04<01:02, 2.55it/s][02/27 16:57:49 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01318.jpg: detected 0 instances in 0.16s
4%|███████▎ | 7/165 [00:04<00:51, 3.07it/s][02/27 16:57:49 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01757.jpg: detected 0 instances in 0.12s
5%|████████▎ | 8/165 [00:04<00:43, 3.62it/s][02/27 16:57:49 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01600.jpg: detected 0 instances in 0.19s
5%|█████████▎ | 9/165 [00:05<00:40, 3.81it/s][02/27 16:57:50 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01086.jpg: detected 0 instances in 0.12s
6%|██████████▎ | 10/165 [00:05<00:35, 4.36it/s][02/27 16:57:50 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01093.jpg: detected 0 instances in 0.15s
7%|███████████▎ | 11/165 [00:05<00:33, 4.59it/s][02/27 16:57:50 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01590.jpg: detected 0 instances in 0.17s
7%|████████████▎ | 12/165 [00:05<00:33, 4.60it/s][02/27 16:57:50 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01988.jpg: detected 0 instances in 0.18s
8%|█████████████▍ | 13/165 [00:05<00:33, 4.58it/s][02/27 16:57:50 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01990.jpg: detected 0 instances in 0.12s
8%|██████████████▍ | 14/165 [00:06<00:30, 5.01it/s][02/27 16:57:51 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01348.jpg: detected 0 instances in 0.16s
9%|███████████████▍ | 15/165 [00:06<00:30, 4.97it/s][02/27 16:57:51 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01100.jpg: detected 0 instances in 0.12s
10%|████████████████▍ | 16/165 [00:06<00:27, 5.39it/s][02/27 16:57:51 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01975.jpg: detected 0 instances in 0.18s
10%|█████████████████▌ | 17/165 [00:06<00:28, 5.17it/s][02/27 16:57:51 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01729.jpg: detected 0 instances in 0.19s
11%|██████████████████▌ | 18/165 [00:06<00:30, 4.84it/s][02/27 16:57:51 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01111.jpg: detected 0 instances in 0.13s
12%|███████████████████▌ | 19/165 [00:07<00:29, 5.02it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01983.jpg: detected 0 instances in 0.14s
12%|████████████████████▌ | 20/165 [00:07<00:28, 5.04it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01085.jpg: detected 0 instances in 0.14s
13%|█████████████████████▋ | 21/165 [00:07<00:28, 5.05it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01445.jpg: detected 0 instances in 0.14s
13%|██████████████████████▋ | 22/165 [00:07<00:28, 5.07it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01102.jpg: detected 0 instances in 0.13s
14%|███████████████████████▋ | 23/165 [00:07<00:26, 5.31it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01756.jpg: detected 0 instances in 0.18s
15%|████████████████████████▋ | 24/165 [00:08<00:28, 5.02it/s][02/27 16:57:52 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01608.jpg: detected 0 instances in 0.12s
15%|█████████████████████████▊ | 25/165 [00:08<00:25, 5.39it/s][02/27 16:57:53 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01760.jpg: detected 0 instances in 0.15s
16%|██████████████████████████▊ | 26/165 [00:08<00:25, 5.35it/s][02/27 16:57:53 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01964.jpg: detected 0 instances in 0.12s
16%|███████████████████████████▊ | 27/165 [00:08<00:24, 5.59it/s][02/27 16:57:53 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01766.jpg: detected 0 instances in 0.14s
17%|████████████████████████████▊ | 28/165 [00:08<00:24, 5.56it/s][02/27 16:57:53 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01616.jpg: detected 0 instances in 0.15s
18%|█████████████████████████████▉ | 29/165 [00:08<00:25, 5.36it/s][02/27 16:57:53 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01323.jpg: detected 0 instances in 0.13s
18%|██████████████████████████████▉ | 30/165 [00:09<00:24, 5.57it/s][02/27 16:57:54 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01598.jpg: detected 0 instances in 0.29s
19%|███████████████████████████████▉ | 31/165 [00:09<00:30, 4.45it/s][02/27 16:57:54 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01744.jpg: detected 0 instances in 0.13s
19%|████████████████████████████████▉ | 32/165 [00:09<00:27, 4.75it/s][02/27 16:57:54 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01099.jpg: detected 0 instances in 0.16s
20%|██████████████████████████████████ | 33/165 [00:09<00:27, 4.86it/s][02/27 16:57:54 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01502.jpg: detected 0 instances in 0.12s
21%|███████████████████████████████████ | 34/165 [00:09<00:26, 5.03it/s][02/27 16:57:54 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01485.jpg: detected 0 instances in 0.15s
21%|████████████████████████████████████ | 35/165 [00:10<00:25, 5.13it/s][02/27 16:57:55 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01331.jpg: detected 0 instances in 0.13s
22%|█████████████████████████████████████ | 36/165 [00:10<00:24, 5.31it/s][02/27 16:57:55 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01365.jpg: detected 0 instances in 0.15s
22%|██████████████████████████████████████ | 37/165 [00:10<00:24, 5.31it/s][02/27 16:57:55 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01732.jpg: detected 0 instances in 0.16s
23%|███████████████████████████████████████▏ | 38/165 [00:10<00:24, 5.10it/s][02/27 16:57:55 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01098.jpg: detected 0 instances in 0.13s
24%|████████████████████████████████████████▏ | 39/165 [00:10<00:23, 5.32it/s][02/27 16:57:55 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01976.jpg: detected 0 instances in 0.18s
24%|█████████████████████████████████████████▏ | 40/165 [00:11<00:25, 4.99it/s][02/27 16:57:56 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01584.jpg: detected 0 instances in 0.17s
25%|██████████████████████████████████████████▏ | 41/165 [00:11<00:24, 4.97it/s][02/27 16:57:56 detectron2]: datasets/coco/coco_val/val_persimmon/DSC01605.jpg: detected 0 instances in 0.18s
25%|███████████████████████████████████████████▎ | 42/165 [00:11<00:25, 4.81it/s]
No targets are detected, and I don't know why.

How to contribute?

Hi author,
Thanks for your excellent research. While trying to use transfiner, I noticed some bugs, and the installation steps are manual, so I would like to contribute to making transfiner better.
Could you give me some guidelines for contributing?

model questions

Hello author, your research is very good. I would like to ask where the code corresponding to the quadtree module of your paper is; I can't find it.

transfiner to detectron2

Hi, I want to know: can I build the detectron2 environment first, and then copy the transfiner project code into the detectron2 project?

Error list image path

hi,
when I run your code in Colab, I get an error at line 106 of demo/demo.py:

106| args.input = glob.glob(os.path.expanduser(args.input[0]))

it does not return a list of image paths, but if I change it to:

106| args.input = [args.input[0] + '/' + name for name in os.listdir(args.input[0]) if name.endswith(('png', 'jpg', 'jpeg'))]

it works!
Also, the mask results are not as good as you said; can you tell me how to get results as good as in your paper?
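On the glob error above: the likely cause is that glob.glob only expands shell-style wildcards, so given a bare directory path it returns at most the directory itself, never the files inside it. Keeping the original one-liner but adding a wildcard achieves the same thing as the listdir workaround; a small sketch (the directory name is illustrative):

import glob
import os

image_dir = "demo/sample_imgs"  # hypothetical input directory
# A bare directory matches only itself; a wildcard matches the files inside.
inputs = glob.glob(os.path.expanduser(os.path.join(image_dir, "*.jpg")))
print(inputs)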

ResNet-50 model weight

Hi authors, thanks for this work! I want to ask: will you release the pretrained ResNet-50 model weights? Thanks!

no predicted box and category

Hi, I am a newbie. After running demo.py, there is only a mask in the predicted image; there is no predicted box or category. Why is that? How can I make them show up?

Asking for code component

Thank you for your nice work. I want to use these modules for salient object detection, but I can't find the relevant code from the paper. Would you mind showing me where the code for the component below is?
[image attachment]
Best wishes to you. ❤️

how to export trained weights to onnx

I have tried the detectron2 export script, but it failed.

  File "/workspace/transfiner/detectron2/export/caffe2_modeling.py", line 274, in forward
    detector_results, _ = self._wrapped_model.roi_heads(images, features, proposals)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/workspace/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 752, in forward
    pred_instances = self.forward_with_given_boxes(features, pred_instances)
  File "/workspace/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 778, in forward_with_given_boxes
    instances = self._forward_mask(features, instances)
  File "/workspace/transfiner/detectron2/modeling/roi_heads/roi_heads.py", line 852, in _forward_mask
    features = self.mask_pooler(features, boxes)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 725, in _call_impl
    result = self._slow_forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 709, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/workspace/transfiner/detectron2/export/c10.py", line 360, in forward
    assert roi_feat_shuffled.numel() > 0 and rois_idx_restore_int32.numel() > 0, (
AssertionError: Caffe2 export requires tracing with a model checkpoint + input that can produce valid detections. But no detections were obtained with the given checkpoint and input!
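As the assertion text says, this export path traces the model on a sample input and requires the checkpoint to produce at least one valid detection on it; an input on which nothing is detected aborts the trace. Assuming that is what happens here, retrying with a COCO image on which the model clearly detects objects (verify with demo.py first) is the usual first step; whether transfiner's custom mask-head operations are exportable beyond that point is a separate question.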

‘max_mem’ keeps increasing

During training on my dataset, 'max_mem' keeps increasing, and eventually GPU memory runs out, which stops the training.
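One clarification that may help: assuming this fork matches upstream detectron2, the max_mem value in the training log is torch.cuda.max_memory_allocated(), a high-water mark that by definition never decreases. A slowly climbing max_mem is therefore normal (occasional batches with many instances set new peaks) and is distinct from an actual out-of-memory crash; the memory-reducing config keys sketched under the 'CUDA out of memory' issue above apply here as well.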

Jagged edges in the output the masks

Hi,

Config: mask_rcnn_R_101_FPN_3x_deform.yaml
checkpoint: output_3x_transfiner_r101_deform.pth

All other settings are defaults from the yaml config.

I am doing prediction like this (I used the demo.py script as well, and the results are the same). I don't see any errors/warnings in the logs:

import cv2
from detectron2.engine import DefaultPredictor

# cfg is built from the config above; src_img_path is the input image path
im = cv2.imread(src_img_path)
predictor = DefaultPredictor(cfg)
outputs = predictor(im)

I see that the edges of the masks are not smooth, even for the sample images in the repo. Is this expected? I am asking because the GIFs in the README appear to have smooth edges, so I was expecting smooth edges here as well.

[attached output images: 000000131444-out, 000000126137-out, 000000008844-out, 000000157365-out]
