Comments (12)
After channel pruning, the resulting checkpoint files still save weight tensors in their original sizes, including those all-zero channels. Therefore, the checkpoint files will not be smaller.
We have provided a model conversion script, tools/conversion/export_pb_tflite_models.py, to generate *.pb and *.tflite models that are smaller after channel pruning. See the tutorial documentation for detailed usage.
@xiaomr Or you can delete the channels whose elements are all zeros by yourself.
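A minimal NumPy sketch of that manual approach, assuming a convolution kernel in TensorFlow's [H, W, C_in, C_out] layout (the function name and layout convention are assumptions for illustration, not PocketFlow API):

```python
import numpy as np

def prune_zero_output_channels(weights):
    """Drop output channels whose weights are entirely zero.

    weights: conv kernel of shape [H, W, C_in, C_out].
    Returns the pruned kernel and the indices of the kept channels.
    """
    # A channel is prunable when every element along H, W, C_in is zero.
    keep = np.any(weights != 0, axis=(0, 1, 2))
    kept_idx = np.where(keep)[0]
    return weights[:, :, :, kept_idx], kept_idx

# Toy example: 4 output channels, two of them zeroed out by pruning.
w = np.random.randn(3, 3, 16, 4)
w[:, :, :, 1] = 0.0
w[:, :, :, 3] = 0.0
pruned, kept = prune_zero_output_channels(w)
print(pruned.shape)  # (3, 3, 16, 2)
print(kept)          # [0 2]
```

Note that after removing output channels of one layer, the corresponding input channels of the next layer must be removed as well to keep the shapes consistent.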
Thanks for the explanation, it works! With the help of tools/conversion/export_pb_tflite_models.py, I got the smaller compressed *.pb. A few follow-up questions. First, it seems this tool only supports the full-precision mode and the dis_chn_pruned mode, because both of them have the images final and logits final collections that the tool requires; when I use the tool on the channel-pruning mode, it throws an error. Second, in my experiments the inference time of the full-precision mode and the dis_chn_pruned mode is the same. Specifically, it seems the conversion adds many 1x1 convolutions for channel selection, which means pruning does not get an actual speed-up on GPU?
- We are investigating the model conversion issue for channel pruning module.
- After channel pruning, a 1x1 convolution is added for channel selection; according to our evaluation results, this brings a speed-up for CPU-based inference with TF-Lite. The speed-up on GPU may be negligible.
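The 1x1 channel-selection convolution is effectively a gather over channels; a NumPy sketch (not PocketFlow's actual graph-editing code) showing the equivalence:

```python
import numpy as np

# Feature map in NHWC layout with 4 channels; keep channels 0 and 2.
np.random.seed(0)
x = np.random.randn(1, 8, 8, 4)
kept = [0, 2]

# A 1x1 convolution whose kernel is a 0/1 selection matrix [C_in, C_out]...
sel = np.zeros((4, len(kept)))
sel[kept, range(len(kept))] = 1.0
y_conv = np.einsum('nhwc,ck->nhwk', x, sel)

# ...produces exactly the same output as indexing the kept channels.
y_gather = x[..., kept]
print(np.allclose(y_conv, y_gather))  # True
```

Since the selection is mathematically a no-op apart from dropping channels, whether it yields a measurable speed-up depends on how the runtime executes the smaller convolutions that follow it.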
Bug: cannot convert the compressed model from channel pruning module to *.pb & *.tflite models.
@jiaxiang-wu Thank you! Another observation that confuses me: in full-precision mode, model_transformed.pb is much faster than model_original.pb, about a 5x speed-up on GPU. The channels are not pruned, so why does the graph editing (which seems to contain only dropout-related edits) accelerate it so much?
@xiaomr Can you list the detailed time comparison and describe how these two models were obtained?
@jiaxiang-wu The details are as following:
I set the mode to full precision to train ResNet-20 on CIFAR-10, and after training I get the checkpoint files model.ckpt.data-00000-of-00001, model.ckpt.index, and model.ckpt.meta in ./model_eval. Then I use tools/conversion/export_pb_tflite_models.py to convert the model. I added inference-time test code to the test_pb_model function of the script, measuring the average time of sess.run over several samples, excluding the first 10 warm-up runs. The tool generates two *.pb files in the directory, model_original.pb and model_transformed.pb; their times are 122 ms and 20 ms respectively on my GPU. I am confused about this acceleration.
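A generic sketch of that timing methodology (warm-up runs excluded, then averaging), using a stand-in workload rather than the actual sess.run call:

```python
import time

def benchmark(fn, warmup=10, iters=50):
    """Average wall-clock latency of fn(), skipping warm-up runs."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Stand-in workload; replace with e.g. lambda: sess.run(outputs, feed_dict=...)
avg = benchmark(lambda: sum(i * i for i in range(10000)))
print(f"avg latency: {avg * 1e3:.2f} ms")
```

Excluding warm-up iterations matters on GPU in particular, since the first few runs include kernel compilation and memory-allocation overhead.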
@xiaomr Could you please upload all the files under the models_eval directory, so that we can reproduce your issue?
models_eval.zip
@jiaxiang-wu @psyyz10
@xiaomr The full-precision learner does not do any compression; it is just a learner for training a model. Please use other learners to compress the model.