Comments (5)
@newExplore-hash
It is better to modify "networks/AugmentCE2P.py" as follows:
# from ..modules import InPlaceABNSync
# BatchNorm2d = functools.partial(InPlaceABNSync, activation='none')
BatchNorm2d = nn.BatchNorm2d

# Drop-in replacement: plain BatchNorm2d followed by LeakyReLU, keeping
# the InPlaceABNSync name so the rest of the file stays unchanged.
class InPlaceABNSync(BatchNorm2d):
    def __init__(self, *args, **kwargs):
        super(InPlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        output = super(InPlaceABNSync, self).forward(input)
        output = self.act(output)
        return output
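A quick way to see why this drop-in keeps working with pretrained checkpoints: the LeakyReLU lives inside the BatchNorm2d subclass as a parameter-free attribute, so the state_dict keys of a wrapped block stay identical to the original fused layer. A minimal sketch (the layer sizes are arbitrary and the standalone setup is mine; it assumes a standard PyTorch install):

```python
import torch.nn as nn

BatchNorm2d = nn.BatchNorm2d

class InPlaceABNSync(BatchNorm2d):
    """BatchNorm2d + LeakyReLU drop-in; the activation adds no parameters."""
    def __init__(self, *args, **kwargs):
        super(InPlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        return self.act(super(InPlaceABNSync, self).forward(input))

# A block like the ones in AugmentCE2P.py: the norm layer sits at
# index 1, exactly where the fused InPlaceABNSync used to be.
block = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=1, bias=False),
    InPlaceABNSync(256),
)

# Only conv and batch-norm keys appear; no key mentions the activation.
print(sorted(block.state_dict().keys()))
```

Because no module index shifts and no new keys appear, `model.load_state_dict(...)` accepts the original checkpoint unchanged.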
from self-correction-human-parsing.
Hi @newExplore-hash ,
In brief, you don't need to install the latest InPlaceABNSync yourself. The CUDA files in ./modules are compiled by PyTorch automatically.
As for your own environment problems, please refer to https://github.com/mapillary/inplace_abn/tree/v0.1.1 for the needed packages.
Moreover, if you only need inference, you can replace the InPlaceABNSync layer with one BatchNorm2D layer and one LeakyRelu layer in PyTorch.
> Moreover, if you only need inference, you can replace the InPlaceABNSync layer with one BatchNorm2D layer and one LeakyRelu layer in PyTorch.
Yes, I only use this for inference. I know the role of InPlaceABNSync is to reduce the memory required for training deep networks, so I used nn.BatchNorm2d and nn.LeakyReLU instead of InPlaceABNSync in AugmentCE2P.py for inference, but I hit the following problem:
Traceback (most recent call last):
  File "simple_extractor.py", line 166, in <module>
    main()
  File "simple_extractor.py", line 115, in main
    model.load_state_dict(new_state_dict)
  File "/root/miniconda3/envs/human_parsing/lib/python3.6/site-packages/torch/nn/modules/module.py", line 845, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
    Missing key(s) in state_dict: "decoder.conv3.4.weight", "decoder.conv3.4.bias", "decoder.conv3.4.running_mean", "decoder.conv3.4.running_var".
    Unexpected key(s) in state_dict: "decoder.conv3.2.weight", "decoder.conv3.3.bias", "decoder.conv3.3.running_mean", "decoder.conv3.3.running_var".
    size mismatch for decoder.conv3.3.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
This error only occurs for self.conv3 in Decoder_Module; PSPModule, Edge_Module, and the other Decoder_Module operations (self.conv1 and self.conv2) load normally.
This is the code:
class Decoder_Module(nn.Module):
    """
    Parsing Branch Decoder Module.
    """
    def __init__(self, num_classes):
        super(Decoder_Module, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            nn.LeakyReLU()   # added
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(256, 48, kernel_size=1, stride=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(48),
            nn.LeakyReLU()   # added
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(304, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            nn.LeakyReLU(),  # added
            nn.Conv2d(256, 256, kernel_size=1, padding=0, dilation=1, bias=False),
            InPlaceABNSync(256),
            nn.LeakyReLU()   # added
        )
        self.conv4 = nn.Conv2d(256, num_classes, kernel_size=1, padding=0, dilation=1, bias=True)
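The mismatch comes from nn.Sequential indexing: inserting nn.LeakyReLU() between the layers of conv3 shifts every later module's index by one, so the checkpoint's decoder.conv3.2.* (second conv) and decoder.conv3.3.* (second batch norm) keys no longer line up with the model, which matches the Missing/Unexpected/size-mismatch messages above exactly. One workaround is to rename the checkpoint keys before loading; a minimal sketch (the helper name remap_conv3_key is made up, and the index map assumes the original layout Conv2d(0), ABN(1), Conv2d(2), ABN(3) versus the modified Conv2d(0), BN(1), LeakyReLU(2), Conv2d(3), BN(4), LeakyReLU(5)):

```python
# Map old nn.Sequential indices inside decoder.conv3 to the new ones
# after two nn.LeakyReLU() modules were inserted (indices 2 and 5).
old_to_new = {"0": "0", "1": "1", "2": "3", "3": "4"}

def remap_conv3_key(key):
    """Rename a checkpoint key under decoder.conv3 to its shifted index."""
    prefix = "decoder.conv3."
    if not key.startswith(prefix):
        return key                      # every other branch is untouched
    idx, rest = key[len(prefix):].split(".", 1)
    return prefix + old_to_new[idx] + "." + rest

checkpoint_keys = [
    "decoder.conv3.2.weight",   # second Conv2d in the checkpoint
    "decoder.conv3.3.weight",   # second batch norm in the checkpoint
    "decoder.conv1.0.weight",   # unaffected branch
]
print([remap_conv3_key(k) for k in checkpoint_keys])
# -> ['decoder.conv3.3.weight', 'decoder.conv3.4.weight', 'decoder.conv1.0.weight']
```

In practice it is simpler to avoid the shift entirely by folding the activation into the norm layer, as the BatchNorm2d subclass elsewhere in this thread does: then no index moves and no remapping is needed.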
Thanks.
I modified "networks/AugmentCE2P.py", replacing the C++-extension InPlaceABNSync so that inference runs on CPU.
It works!
# from ..modules import InPlaceABNSync
# BatchNorm2d = functools.partial(InPlaceABNSync, activation='none')
BatchNorm2d = nn.BatchNorm2d

class ReplacePlaceABNSync(BatchNorm2d):
    def __init__(self, *args, **kwargs):
        super(ReplacePlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        output = super(ReplacePlaceABNSync, self).forward(input)
        output = self.act(output)
        return output

# Point the existing call sites at the replacement.
InPlaceABNSync = ReplacePlaceABNSync
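A quick CPU smoke test for this drop-in (the standalone setup and shapes are mine for illustration; it assumes a standard PyTorch install and checks only that the layer runs and preserves the tensor shape):

```python
import torch
import torch.nn as nn

BatchNorm2d = nn.BatchNorm2d

class ReplacePlaceABNSync(BatchNorm2d):
    """BatchNorm2d followed by LeakyReLU, standing in for InPlaceABNSync."""
    def __init__(self, *args, **kwargs):
        super(ReplacePlaceABNSync, self).__init__(*args, **kwargs)
        self.act = nn.LeakyReLU()

    def forward(self, input):
        return self.act(super(ReplacePlaceABNSync, self).forward(input))

layer = ReplacePlaceABNSync(48).eval()   # eval mode: use running statistics
with torch.no_grad():
    out = layer(torch.randn(1, 48, 8, 8))
print(tuple(out.shape))  # (1, 48, 8, 8)
```

Everything here runs on plain CPU tensors, so no CUDA extension needs to be compiled.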