lavender105 / dff
Code for Dynamic Feature Fusion for Semantic Edge Detection https://arxiv.org/abs/1902.09104
License: MIT License
Hi there, thanks for your research, it's a real gold mine. I just have a couple of questions:
Firstly, in the CASENet paper upsampling is used for Side 1, but in your code no upsampling is applied to Side 1.
Secondly, the paper mentions "bilinear upsampling", which would require the PyTorch function torch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None) with mode set to 'bilinear'. Why is ConvTranspose2d used in your code instead?
Thanks in advance!
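For comparison, here is a minimal sketch (my own, not from the repository) showing both upsampling options on a hypothetical 19-channel side output; the ConvTranspose2d hyperparameters (kernel_size=16, stride=8, padding=4, per-channel groups) are assumptions chosen to produce the same 8x output size, not the repository's exact settings:

```python
import torch
import torch.nn.functional as F
from torch import nn

# Hypothetical side-output feature map: 1 image, 19 channels, 64x64.
x = torch.randn(1, 19, 64, 64)

# Option A: parameter-free bilinear upsampling, as described in the paper.
up_bilinear = F.interpolate(x, scale_factor=8, mode='bilinear', align_corners=False)

# Option B: learnable transposed convolution. With these hyperparameters the
# output resolution matches Option A; if its weights are initialized to a
# bilinear kernel, it starts out equivalent but can be fine-tuned.
deconv = nn.ConvTranspose2d(19, 19, kernel_size=16, stride=8, padding=4,
                            groups=19, bias=False)
up_deconv = deconv(x)

print(up_bilinear.shape)  # torch.Size([1, 19, 512, 512])
print(up_deconv.shape)    # torch.Size([1, 19, 512, 512])
```

A deconvolution initialized with bilinear weights behaves identically to interpolation at the start of training, which may be why the code uses it.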
I am trying to run train.py, but it gets stuck when running the following code:
if args.cuda:
    self.model = DataParallelModel(self.model).cuda()
    self.criterion = DataParallelCriterion(self.criterion).cuda()
I checked the definition of DataParallelModel and found that there is no cuda() method in it. However, when I modified the code as follows, the program ran successfully:
if args.cuda:
    #self.model = DataParallelModel(self.model).cuda()
    self.model = DataParallelModel(self.model)
    #self.criterion = DataParallelCriterion(self.criterion).cuda()
    self.criterion = DataParallelCriterion(self.criterion)
Why?
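For what it's worth, wrappers like DataParallelModel normally inherit cuda() from nn.Module, so the hang may lie elsewhere. A common alternative pattern (a sketch using plain PyTorch nn.DataParallel and a stand-in model, not the encoding wrappers) is to move the model to the GPU before wrapping:

```python
import torch
from torch import nn

model = nn.Linear(10, 2)            # stand-in for the DFF model
criterion = nn.BCEWithLogitsLoss()  # stand-in for the edge loss

if torch.cuda.is_available():
    # Move parameters to the GPU first, then wrap; nn.DataParallel itself
    # holds no parameters of its own.
    model = nn.DataParallel(model.cuda())
    criterion = criterion.cuda()

device = next(model.parameters()).device
out = model(torch.randn(4, 10).to(device))
print(out.shape)  # torch.Size([4, 2])
```

On a CPU-only machine this simply skips the GPU branch, which matches the observation that dropping .cuda() made the script run.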
It's nice work! Will you release the pretrained model? Thanks.
RuntimeError: Error(s) in loading state_dict for DFF:
Missing key(s) in state_dict: "ada_learner.conv1.0.weight", "ada_learner.conv1.0.bias", "ada_learner.conv1.1.weight", "ada_learner.conv1.1.bias", "ada_learner.conv1.1.running_mean", "ada_learner.conv1.1.running_var", "ada_learner.conv2.0.weight", "ada_learner.conv2.0.bias", "ada_learner.conv2.1.weight", "ada_learner.conv2.1.bias", "ada_learner.conv2.1.running_mean", "ada_learner.conv2.1.running_var", "ada_learner.conv3.0.weight", "ada_learner.conv3.0.bias", "ada_learner.conv3.1.weight", "ada_learner.conv3.1.bias", "ada_learner.conv3.1.running_mean", "ada_learner.conv3.1.running_var", "side5_w.0.weight", "side5_w.0.bias", "side5_w.1.weight", "side5_w.1.bias", "side5_w.1.running_mean", "side5_w.1.running_var", "side5_w.2.weight".
Unexpected key(s) in state_dict: "EW1.conv1.0.weight", "EW1.conv1.0.bias", "EW1.conv1.1.weight", "EW1.conv1.1.bias", "EW1.conv1.1.running_mean", "EW1.conv1.1.running_var", "EW1.conv1.1.num_batches_tracked", "EW1.conv2.0.weight", "EW1.conv2.0.bias", "EW1.conv2.1.weight", "EW1.conv2.1.bias", "EW1.conv2.1.running_mean", "EW1.conv2.1.running_var", "EW1.conv2.1.num_batches_tracked", "EW1.conv3.0.weight", "EW1.conv3.0.bias", "EW1.conv3.1.weight", "EW1.conv3.1.bias", "EW1.conv3.1.running_mean", "EW1.conv3.1.running_var", "EW1.conv3.1.num_batches_tracked", "side5_ew.0.weight", "side5_ew.0.bias", "side5_ew.1.weight", "side5_ew.1.bias", "side5_ew.1.running_mean", "side5_ew.1.running_var", "side5_ew.1.num_batches_tracked", "side5_ew.2.weight".
Thanks for your work! I am trying to use this model in my own work but ran into this problem; maybe I am not using it properly? Is there any way you can help me with this bug? Thanks!
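Judging from the key names alone (an assumption, not a documented fix), the checkpoint appears to come from a version of the DFF class in which ada_learner was named EW1 and side5_w was named side5_ew. A renaming sketch on a toy state_dict:

```python
import torch

# Toy state_dict using the old-style key names from the error message.
state = {
    'EW1.conv1.0.weight': torch.zeros(1),
    'side5_ew.0.weight': torch.zeros(1),
}

# Rename old-style prefixes to the module names the current DFF class expects:
# EW1.* -> ada_learner.*, side5_ew.* -> side5_w.*
renamed = {k.replace('EW1.', 'ada_learner.').replace('side5_ew.', 'side5_w.'): v
           for k, v in state.items()}

print(sorted(renamed))  # ['ada_learner.conv1.0.weight', 'side5_w.0.weight']

# Then load with strict=False, which tolerates leftover keys such as
# *.num_batches_tracked:
# model.load_state_dict(renamed, strict=False)
```

This is only a workaround sketch; whether the renamed weights are semantically compatible depends on the two model versions actually sharing the same architecture.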
As we can see, the PyTorch version has eleven model files, which makes it difficult to find the one that defines the proposed network.
Thanks for sharing your work. Could you give me some advice on setting the parameters to reproduce the performance reported in the paper?
run code/demoPreproc_gen_png_label.m
Running this command, I cannot obtain the processed edge maps. I am running it on Windows; is there a way to solve this?
In your paper, you mention that the base_size for the Cityscapes/SBD datasets is 640/512 and the crop_size is 640/352. However, in your code (base_sbd.py and base_cityscapes.py), base_size is never used and only crop_size is used. Can you tell us whether this is intentional or whether something is wrong with the code?
Thanks.
Could the SBD preprocessing code be modified to handle the NYUv2 dataset? I generated forty channel images, but many classes all appear in the last channel image, rather than one class per channel. What do I need to modify: the radius, or something else?
Hello, thanks for your work, which is really efficient for detection and segmentation.
But I cannot open the files downloaded from BaiduYun and Google Drive; maybe some errors occurred during compression. Could you please check?
Hi,
I am trying to run test.py, but there doesn't appear to be a cityscapes_orig module in "dff-master/lib/python3.6/site-packages/encoding/datasets/".
Traceback (most recent call last):
  File "test.py", line 17, in <module>
    import encoding.utils as utils
  File "/home/miniconda2/envs/dff-master/lib/python3.6/site-packages/encoding/__init__.py", line 14, in <module>
    from . import nn, functions, dilated, parallel, utils, datasets
  File "/home/miniconda2/envs/dff-master/lib/python3.6/site-packages/encoding/datasets/__init__.py", line 7, in <module>
    from .cityscapes_orig import CityscapesSegmentation
ModuleNotFoundError: No module named 'encoding.datasets.cityscapes_orig'
The cityscapes.py file in encoding/datasets also does not have CityscapesSegmentation in it.
Has anyone else had this issue and been able to resolve it?
Hi, Cityscapes has no labels for the test set, so how did you test? Are the metrics in the paper computed on the val set?
To build the specific PyTorch version (branch ed02619) of this project, I followed this video: https://www.youtube.com/watch?v=sGWLjbn5cgs
However, it failed. The log is below:
https://gist.github.com/JANGSOONMYUN/a380e6876122d2cf5c1beb3aaa5d2e65
Are there any ways to build the branched PyTorch on Windows?
I followed the install instructions, but got this error:
Hi,
I got the following error on import encoding.utils as utils. Is pytorch-encoding really required? Can I just use the default utils and nn packages provided by PyTorch?
Traceback (most recent call last):
  File "G:\My Drive\Debvrat - shared\Codes\Edge Detection\CASENet\DFF-master\exps\train.py", line 17, in <module>
    import encoding.utils as utils
  File "C:\Anaconda3\lib\site-packages\encoding\__init__.py", line 14, in <module>
    from . import nn, functions, dilated, parallel, utils, datasets
  File "C:\Anaconda3\lib\site-packages\encoding\nn\__init__.py", line 12, in <module>
    from .encoding import *
  File "C:\Anaconda3\lib\site-packages\encoding\nn\encoding.py", line 18, in <module>
    from ..functions import scaled_l2, aggregate, pairwise_cosine
  File "C:\Anaconda3\lib\site-packages\encoding\functions\__init__.py", line 2, in <module>
    from .encoding import *
  File "C:\Anaconda3\lib\site-packages\encoding\functions\encoding.py", line 14, in <module>
    from .. import lib
  File "C:\Anaconda3\lib\site-packages\encoding\lib\__init__.py", line 15, in <module>
    ], build_directory=cpu_path, verbose=False)
  File "C:\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 680, in load
    is_python_module)
  File "C:\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 866, in _jit_compile
    with_cuda=with_cuda)
  File "C:\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 915, in _write_ninja_file_and_build
    with_cuda=with_cuda)
  File "C:\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1195, in _write_ninja_file
    'cl']).decode().split('\r\n')
  File "C:\Anaconda3\lib\subprocess.py", line 411, in check_output
    **kwargs).stdout
  File "C:\Anaconda3\lib\subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.
My system spec:
i7-6700
RTX 2070 8 GB
16 GB DDR4 RAM
Anaconda3
Python 3.6
CUDA 10.0 (because RTX GPUs only support CUDA 10.0 or higher).
Traceback (most recent call last):
  File "train.py", line 17, in <module>
    import encoding.utils as utils
  File "/home/robotics/anaconda3/envs/dff/lib/python3.6/site-packages/encoding/__init__.py", line 14, in <module>
    from . import nn, functions, dilated, parallel, utils, datasets
  File "/home/robotics/anaconda3/envs/dff/lib/python3.6/site-packages/encoding/utils/__init__.py", line 13, in <module>
    from .lr_scheduler_orig import LR_Scheduler_orig
ModuleNotFoundError: No module named 'encoding.utils.lr_scheduler_orig'
Hi, I ran into this problem when running train.py.
I installed everything following INSTALL.md (https://github.com/Lavender105/DFF/blob/master/INSTALL.md) and also finished the preprocessing.
Please let me know how to solve it. Thank you.
Hello,
np.unpackbits gives 24 bits; for Cityscapes we reverse the order (19 classes in total), and for the SBD dataset we reverse the order (20 classes in total).
For ADE20K, which has 150 classes, what should I do to get 150 bits?
def _sync_transform(self, img, mask):
    # hy modified this function
    crop_size = self.crop_size
    w, h = img.size
    x1 = random.randint(0, w - crop_size)
    y1 = random.randint(0, h - crop_size)
    img = img.crop((x1, y1, x1 + crop_size, y1 + crop_size))
    mask = mask.crop((x1, y1, x1 + crop_size, y1 + crop_size))
    # np.unpackbits gives 24 bits; reverse the order and keep the last 19
    # (total 19 classes), i.e. [:, :, -1:-20:-1]
    mask = np.unpackbits(np.array(mask), axis=2)[:, :, -1:-20:-1]
    mask = torch.from_numpy(np.array(mask)).float()
    mask = mask.transpose(0, 1).transpose(0, 2)  # channel first
    return img, mask
Thank you
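One possible direction (my own sketch, not the authors' recipe): the 24-bit trick works for Cityscapes/SBD because the labels fit in a 3-byte RGB PNG. For 150 classes you need ceil(150/8) = 19 bytes per pixel, which no standard image format holds, so you could store the packed labels as an array file instead. The array names here are hypothetical:

```python
import numpy as np

h, w, num_classes = 4, 4, 150
num_bytes = (num_classes + 7) // 8  # 19 bytes hold up to 152 bits

# Hypothetical per-pixel binary edge labels for ADE20K, shape (H, W, 150).
bits = np.random.randint(0, 2, size=(h, w, num_classes), dtype=np.uint8)

# Pack along the channel axis into (H, W, 19) uint8 for storage,
# e.g. via np.save instead of a PNG.
packed = np.packbits(bits, axis=2)

# Unpacking yields (H, W, 152); drop the 2 zero padding bits at the end.
recovered = np.unpackbits(packed, axis=2)[:, :, :num_classes]

print(np.array_equal(bits, recovered))  # True
```

Note this keeps the natural bit order rather than the reversed [:, :, -1:-20:-1] indexing used in the repository's PNG-based loaders, so the dataset class would need adjusting to match.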
First of all, I got inspired by and enjoyed your work. Now I have a question: what is the difference between training with instance-sensitive (inst) and non-instance-sensitive (cls) edge labels?
Since the input to your loss function is the network output without a sigmoid, I find your code difficult to understand. What are 'max_val' and 'log_weight' here? Is this different from the loss function in CASENet? Could you give me a brief explanation? Thanks a lot.
Lines 16 to 43 in b215d3c
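For context (an educated guess from the variable names, not the authors' explanation): max_val and log_weight typically appear in the numerically stable form of weighted binary cross-entropy computed directly on logits, the same identity PyTorch uses in BCEWithLogitsLoss with pos_weight. A sketch checking that identity:

```python
import torch
import torch.nn.functional as F

x = torch.randn(5)                      # logits (no sigmoid applied)
t = torch.randint(0, 2, (5,)).float()   # binary edge targets
w = torch.tensor(3.0)                   # positive-class weight

# Stable weighted BCE on logits; this is where max_val and log_weight arise:
#   max_val keeps exp() arguments non-positive to avoid overflow,
#   log_weight = 1 + (w - 1) * t applies the positive-class weight.
max_val = (-x).clamp(min=0)
log_weight = 1 + (w - 1) * t
loss = (1 - t) * x + log_weight * (
    max_val + ((-max_val).exp() + (-x - max_val).exp()).log())

ref = F.binary_cross_entropy_with_logits(x, t, pos_weight=w, reduction='none')
print(torch.allclose(loss, ref))  # True
```

If the repository's loss matches this pattern, it is mathematically the same reweighted sigmoid cross-entropy as CASENet's, just written in an overflow-safe form that consumes raw logits.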
I can't reproduce the performance on the SBD dataset reported in the paper with a single GPU, using the command 'python train.py --dataset sbd --model dff --backbone resnet101 --checkname dff --base-size 352 --crop-size 352 --epochs 10 --batch-size 4 --lr 0.05 --workers 8'. There is a huge gap to the performance in the paper.
Does something need to be modified? Due to hardware constraints I can only set the batch size to 4. Should I adjust other parameters?
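One widely used heuristic when shrinking the batch size is the linear scaling rule: scale the learning rate by the same factor. This is general lore, not advice from the DFF paper, and the reference batch size of 16 below is an assumption, not the authors' value:

```python
# Linear scaling rule (common heuristic, not from the DFF paper).
# base_batch = 16 is a hypothetical reference setting.
base_lr, base_batch = 0.05, 16
my_batch = 4
scaled_lr = base_lr * my_batch / base_batch
print(scaled_lr)  # 0.0125
```

With batch size 4 and the original lr 0.05, the effective per-sample step is 4x larger than intended, which alone could explain a sizeable gap.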
I used the pretrained dff_cityscapes_resnet101.pth model and tested it on the Cityscapes val set. The performance is about 0.714 mean MF-ODS, which is much lower than the paper reports (0.804 mean MF-ODS). The maxDist is set to 0.02 and all other settings are the same as in the code you provided.
Can you help explain why?
Thanks
When I tried to train on my own dataset, I met the following error messages.
Does anyone have a solution to this error? Thank you.
Traceback (most recent call last):
  File "train.py", line 211, in <module>
    trainer.training(epoch)
  File "train.py", line 149, in training
    loss.backward()
AttributeError: 'tuple' object has no attribute 'backward'
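A likely cause (my guess, not confirmed by the authors): with multiple side outputs, or with a parallel criterion wrapper, the criterion can return a tuple of losses rather than a single scalar tensor. A minimal sketch of reducing it before calling backward():

```python
import torch

# Stand-in for a criterion that returns one loss per side output
# (or per GPU) instead of a single scalar tensor.
a = torch.tensor(0.5, requires_grad=True)
b = torch.tensor(0.25, requires_grad=True)
losses = (a, b)

loss = sum(losses)   # reduce the tuple to one scalar tensor
loss.backward()      # scalar tensors support backward()
print(loss.item())   # 0.75
```

If your criterion really does return a tuple, summing (or averaging) its elements in the training loop before loss.backward() avoids this AttributeError.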
Good job! Do you have an already-trained model (the whole model)? I want to test it on my own images. Thanks.
Hello, INSTALL.md (https://github.com/Lavender105/DFF/blob/master/INSTALL.md) says that to configure the cuDNN and NCCL related paths, one must fill in the "cudnn root directory" and "nccl root directory". How can I find these two paths?