shawnbit / unet-family
Paper and implementation of UNet-related models.
Could you please explain how to run your code? Specifically: what should the input image to the model look like, how do I add labels for the training images, and what procedure should be followed during training?
As the title says.
I am trying to train the model on a custom dataset containing BSDS500 images. When I try to train the model, this error occurs:
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
in
     24 original = original.to(device)
     25 original = original.unsqueeze(0)
---> 26 out = model(original)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491     result = self._slow_forward(*input, **kwargs)
    492 else:
--> 493     result = self.forward(*input, **kwargs)
    494 for hook in self._forward_hooks.values():
    495     hook_result = hook(self, input, result)

in forward(self, inputs)
     51 up3 = self.up_concat3(up4, conv3)  # 64*128*128
     52 up2 = self.up_concat2(up3, conv2)  # 32*256*256
---> 53 up1 = self.up_concat1(up2, conv1)  # 16*512*512
     54
     55 final = self.final(up1)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    491     result = self._slow_forward(*input, **kwargs)
    492 else:
--> 493     result = self.forward(*input, **kwargs)
    494 for hook in self._forward_hooks.values():
    495     hook_result = hook(self, input, result)

in forward(self, high_feature, *low_feature)
     53 outputs0 = self.up(high_feature)
     54 for feature in low_feature:
---> 55     outputs0 = torch.cat([outputs0, feature], 1)
     56 return self.conv(outputs0)

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 321 and 320 in dimension 2 at C:/w/1/s/tmp_conda_3.6_035809/conda/conda-bld/pytorch_1556683229598/work/aten/src\THC/generic/THCTensorMath.cu:71
```
Please help: what is the problem and how can I solve it?
Thanks a lot.
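A likely cause: BSDS500 images are 481x321, and the UNet encoder downsamples four times (a factor of 16). Since 321 is not divisible by 16, the downsample/upsample round trip yields a decoder feature map one pixel narrower than the encoder skip feature (320 vs. 321 in the error), so `torch.cat` fails. One common workaround, sketched below under the assumption that the model accepts any NCHW tensor, is to zero-pad inputs up to the next multiple of 16 before the forward pass (`pad_to_multiple` is a hypothetical helper, not part of this repo):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple=16):
    """Zero-pad an NCHW tensor on the right/bottom so H and W divide evenly."""
    h, w = x.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad's last-dim-first order: (left, right, top, bottom)
    return F.pad(x, (0, pad_w, 0, pad_h))

img = torch.randn(1, 3, 321, 481)   # a BSDS500-sized image (H=321, W=481)
padded = pad_to_multiple(img)
print(padded.shape)                  # torch.Size([1, 3, 336, 496])
```

After inference you can crop the output back to the original `(321, 481)` size; alternatively, resizing the dataset to a fixed size such as 320x480 avoids padding altogether.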
Hi,
I tried to train the model using a pixel-wise criterion (L1 loss). My input image has 3 channels, but since the final Conv2d layer uses n_classes (2 by default) as out_channels, the output has 2 channels and the criterion cannot compare the two images. Why should out_channels be 2? Can it be changed to 3?
The code:
```python
self.final = nn.Conv2d(filters[0], n_classes, 1)
```
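`out_channels` equals `n_classes` because the network is built for segmentation: it emits one score map per class, and 2 is just the default for binary foreground/background. For an image-to-image task trained with L1 loss, setting `n_classes=3` so the output matches a 3-channel RGB target should work. A minimal sketch of the final layer in isolation, assuming `filters[0]` is 16 as a stand-in value:

```python
import torch
import torch.nn as nn

filters0 = 16    # stand-in for the repo's filters[0] (first-level channel count)
n_classes = 3    # 3 output channels to match a 3-channel RGB target for L1 loss

# Mirrors the repo's final layer: 1x1 conv mapping features to n_classes channels.
final = nn.Conv2d(filters0, n_classes, kernel_size=1)

feat = torch.randn(1, filters0, 64, 64)   # hypothetical decoder feature map
out = final(feat)
print(out.shape)  # torch.Size([1, 3, 64, 64])
```

Note that segmentation-specific losses in the codebase (e.g. anything applying softmax over class channels) may assume class scores rather than RGB values, so check the loss you pair with this change.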