Comments (7)
Hi @cattpku,
You probably missed issue #85, but I think what you are seeing is the same bug.
If you can, please try the https://github.com/NervanaSystems/distiller/tree/issue_85 branch, which fixes this issue (for filters; channel-pruning is still WiP).
As issue #85 explains, some networks have complicated data dependencies, which means we can't prune filters arbitrarily; we have to group them so that they all follow the same pruning decision. I realize this is somewhat hard to follow without more details, so I plan to write and post a longer explanation in a couple of days.
Until then, you can create a PNG graph of your model, and I can use it to illustrate the dependencies and how to express them.
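Something like this minimal sketch should produce the PNG (I'm assuming the `SummaryGraph` and `draw_model_to_file` utilities from `distiller/model_summaries.py` on your checkout; `MyModel` and the input shape are placeholders for your own network and data):
```python
import torch
import distiller
from distiller.model_summaries import draw_model_to_file

# 'MyModel' and the input shape are placeholders; substitute your network
# and a dummy input of the right size.
model = MyModel()
dummy_input = torch.randn(1, 3, 224, 224)

# Trace the model into a SummaryGraph and render it to a PNG file.
sgraph = distiller.SummaryGraph(model, dummy_input)
draw_model_to_file(sgraph, 'model.png')
```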
Another option: take a look at this example, specifically at low_pruner_2.
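To give you the flavor of what that pruner expresses, here is a sketch (layer names are placeholders, and `group_dependency` is the mechanism from the issue_85 work, so treat this as illustrative rather than tested):
```yaml
pruners:
  grouped_pruner:
    class: 'L1RankedStructureParameterPruner'
    group_type: Filters
    desired_sparsity: 0.5
    # Every weight in the list follows the pruning decision made for the
    # first ("leader") entry, so coupled layers keep matching shapes.
    group_dependency: Leader
    # Placeholder layer names; substitute the coupled layers of your model.
    weights: [module.branch_a.conv.weight,
              module.branch_b.conv.weight]
```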
I hope this helps for now,
Neta
Thanks for your kind explanation, Neta. I will try the branch later.
Hi Neta,
I tried the recommended branch, but the error is still there, exactly the same as before.
@cattpku - any chance you can provide more information? For example, the model definition and your YAML schedule file would help me understand, and hopefully recreate, what you are seeing.
Thanks
Hi Neta,
Sure. It is a DeepLabV3+ structure; the definition is:
```python
import torch
import torch.nn as nn

# `wrapper` and `stage_out_channels` are defined in my backbone code (not shown).

class ASPP_module(nn.Module):
    def __init__(self, inplanes, planes):
        super(ASPP_module, self).__init__()
        self.aspp0 = nn.Sequential(nn.Conv2d(inplanes, planes, kernel_size=1,
                                             stride=1, padding=0, dilation=1, bias=False),
                                   nn.BatchNorm2d(planes))
        self.aspp1 = nn.Sequential(nn.Conv2d(inplanes, planes, kernel_size=3,
                                             stride=1, padding=6, dilation=6, bias=False),
                                   nn.BatchNorm2d(planes))
        self.aspp2 = nn.Sequential(nn.Conv2d(inplanes, planes, kernel_size=3,
                                             stride=1, padding=12, dilation=12, bias=False),
                                   nn.BatchNorm2d(planes))
        self.aspp3 = nn.Sequential(nn.Conv2d(inplanes, planes, kernel_size=3,
                                             stride=1, padding=18, dilation=18, bias=False),
                                   nn.BatchNorm2d(planes))

    def forward(self, x):
        x0 = self.aspp0(x)
        x1 = self.aspp1(x)
        x2 = self.aspp2(x)
        x3 = self.aspp3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)


class DeepLab_v3_plus(nn.Module):
    def __init__(self, nInputChannels=3, n_classes=2):
        super(DeepLab_v3_plus, self).__init__()
        # wrapped feature extractor
        self.wrapped_features = wrapper(1)
        # ASPP
        self.aspp = ASPP_module(stage_out_channels[2], 256)
        # global pooling
        self.global_avg_pool = nn.Sequential(nn.AdaptiveAvgPool2d((1, 1)),
                                             nn.Conv2d(stage_out_channels[2], 256, 1, stride=1, bias=False))
        self.conv = nn.Conv2d(1280, 256, 1, bias=False)
        self.bn = nn.BatchNorm2d(256)
        self.upsample_size = nn.Upsample(size=(60, 80), mode='bilinear', align_corners=None)
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=None)
        self.upsample_4 = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=None)
        self.last_conv = nn.Sequential(nn.Conv2d(256, n_classes, kernel_size=1, stride=1))

    def forward(self, x):
        x = self.wrapped_features(x)
        x_aspp = self.aspp(x)                      # 4 x 256 channels
        x_ = self.global_avg_pool(x)               # 256 channels
        x_ = self.upsample_size(x_)
        x_concat = torch.cat((x_aspp, x_), dim=1)  # 1280 channels
        x = self.conv(x_concat)                    # 1280 -> 256
        x = self.bn(x)
        x = self.upsample(x)
        x = self.last_conv(x)
        x = self.upsample_4(x)
        return x
```
And the YAML schedule is:
```yaml
version: 1
pruners:
  filter_pruner_50:
    class: 'L1RankedStructureParameterPruner'
    group_type: Filters
    desired_sparsity: 0.5
    weights: [module.aspp.aspp0.0.weight,
              module.aspp.aspp1.0.weight,
              module.aspp.aspp2.0.weight,
              module.aspp.aspp3.0.weight,
              module.global_avg_pool.1.weight]

extensions:
  net_thinner:
    class: 'FilterRemover'
    thinning_func_str: remove_filters
    arch: 'DeepLab_v3_plus'
    dataset: 'my_own_data'

policies:
  - pruner:
      instance_name: filter_pruner_50
    epochs: [46]

  - extension:
      instance_name: net_thinner
    epochs: [46]
```
The error comes from:
```python
x_concat = torch.cat((x_aspp, x_), dim=1)
x = self.conv(x_concat)
```
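Just to spell out the channel arithmetic behind the failure (a quick sanity check; the numbers follow the model definition above):
```python
# self.conv consumes the concatenation of the four ASPP branches plus the
# global-pooling branch, 256 channels each:
branches = [256, 256, 256, 256, 256]
print(sum(branches))              # 1280 == self.conv's in_channels

# Pruning 50% of each branch's filters leaves half of them, but
# module.conv is not thinned to match:
surviving = [c // 2 for c in branches]
print(sum(surviving))             # 640 channels actually arriving at conv

# Offset of each branch inside the thinned concatenation:
offsets = [sum(surviving[:k]) for k in range(len(surviving))]
print(offsets)                    # [0, 128, 256, 384, 512]
```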
I tried to manually modify 'module.conv' in 'thinning.py': I cleared 'module.conv.weight' and then used 'append_module_directive' and 'append_param_directive' to assign the correct values, and now my experiment succeeds.
Hi @cattpku
"I tried to manually modify 'module.conv' in 'thinning.py' by clearing the 'module.conv.weight', and then using 'append_module_directive' and 'append_param_directive' to assign the correct value, now my own experiment succeeded." ==> can you share the fix?
I tried creating an environment to run DeepLab, but it is taking me too much time.
Thanks!
Neta
Sure.
All modifications are in 'def create_thinning_recipe_filters(sgraph, model, zeros_mask_dict):'.
First, before the 'for successor in successors:' loop, I store the 'indices' of the five target layers (the ones specified in my network definition above) in five new variables:
```python
# Record which filters survive in each of the five branches feeding the
# concat. Each offset is a multiple of 128 because every branch keeps
# 128 of its 256 filters at 50% sparsity.
if layer_name == 'module.aspp.aspp0.0':
    indices_aspp0 = indices
elif layer_name == 'module.aspp.aspp1.0':
    indices_aspp1 = torch.add(indices, 128)
elif layer_name == 'module.aspp.aspp2.0':
    indices_aspp2 = torch.add(indices, 256)
elif layer_name == 'module.aspp.aspp3.0':
    indices_aspp3 = torch.add(indices, 384)
elif layer_name == 'module.global_avg_pool.1':
    indices_pool = torch.add(indices, 512)
```
Then, just before 'return thinning_recipe', I manually modify 'module.conv' as follows:
```python
# Replace the stale directives for module.conv: its input now has
# 5 * 128 = 640 channels, selected by the concatenated per-branch indices.
thinning_recipe.parameters['module.conv.weight'].clear()
append_module_directive(model, thinning_recipe, 'module.conv',
                        key='in_channels', val=640)
append_param_directive(thinning_recipe, 'module.conv.weight',
                       (1, torch.cat((indices_aspp0, indices_aspp1,
                                      indices_aspp2, indices_aspp3,
                                      indices_pool), 0)))
```
From my experiment results so far, this seems to work.