Comments (4)
(1) It looks like pruning has damaged the model. Try a longer pruning interval: the prune_interval parameter of the resrep method in prune.py is the number of iterations between pruning steps, not epochs, so with a larger dataset you should increase it.
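For intuition, here is a minimal sketch of why a fixed prune_interval prunes more often per epoch as the dataset grows (all numbers are assumed example values, not defaults from prune.py):

# prune_interval counts training iterations, not epochs.
dataset_size = 1_000_000     # assumed number of training samples
batch_size = 64              # assumed batch size
prune_interval = 200         # example interval, as in the resrep example

iters_per_epoch = (dataset_size + batch_size - 1) // batch_size   # 15625
prunes_per_epoch = iters_per_epoch / prune_interval               # ~78 pruning steps per epoch
# As the dataset grows, iters_per_epoch grows with it, so prune_interval
# should also grow to keep the number of pruning steps per epoch moderate.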
(2) Is this resrep? The iteration where pruning happens will lose some accuracy, and continuing training afterwards usually recovers part of it.
from torch-model-compression.
(1) Okay, thank you. In the resrep example you provided I see that compression is performed once every 200 iterations. My dataset is larger, so I modified part of your training code as follows:
# begin epoch
print("training...")
for epoch in range(0, self.config["epoch"]):
    # set the learning rate: constant during warmup, then exponential decay
    if epoch <= self.config["warmup_epoch"]:
        lr = 0.005
    else:
        lr = 0.005 * (0.995 ** ((epoch - 1) // 2))
    self.config["lr"] = lr
    self.variable_dict["epoch"] = epoch
    self.run_hook(self.epoch_begin_hook)
    self.variable_dict["avg_mentor"] = AvgMeter()  # accumulates and updates running averages
    self.model.train()
    # max_step = len(self.trainloader)
    # print("This is the max_step", max_step)
    for step, data in enumerate(tqdm(self.trainloader)):
        # tqdm.write("Step: {}".format(step))
        self.variable_dict["step"] = step + 1
        self.variable_dict["iteration"] += 1
        self.run_hook(self.iteration_begin_hook)  # records prune_iteration
        data = self._sample_to_device(data, self.variable_dict["base_device"])
        c_sample_number = self._get_sample_number(data)  # number of samples in the batch
        predict = self.config["predict_function"](self.model, data)
        self.variable_dict["loss"] = self.config["calculate_loss_function"](
            predict, data
        )
        self.on_loss_backward()
        self.variable_dict["loss"].backward()
        self.after_loss_backward()
        # add the decayed penalty gradient to the gradient of the compactor parameters
        self.optimizer.step()  # update parameters
        self.optimizer.zero_grad()  # zero out gradients
        evaluate_result = self.config["evaluate_function"](predict, data)
        evaluate_result["loss"] = self.variable_dict["loss"].item()  # record the exact loss value
        self.variable_dict["avg_mentor"].update(
            evaluate_result, c_sample_number
        )  # update running averages
        # del evaluate_result, predict
        # if self.variable_dict["iteration"] % self.config["log_interval"] == 0:
        self.write_log(
            self.variable_dict["epoch"],
            self.variable_dict["iteration"],
            self.variable_dict["avg_mentor"].get(),
            self.config["lr"],
        )
        self.write_tensorboard(
            "train_log",
            self.variable_dict["iteration"],
            self.variable_dict["avg_mentor"].get(),
        )
        self.run_hook(self.iteration_end_hook)  # model prune, optimizer prune, compute FLOPs
        del self.variable_dict["loss"]
    self.scheduler.step()  # adjust lr
    # evaluation/testing step
    print("evaluating...")
    self.variable_dict["test_avg_mentor"] = AvgMeter()
    self.model.eval()
    for step, data in enumerate(self.testloader):
        self.variable_dict["step"] = step + 1
        data = self._sample_to_device(data, self.variable_dict["base_device"])
        c_sample_number = self._get_sample_number(data)
        predict = self.config["predict_function"](self.model, data)
        evaluate_result = self.config["evaluate_function"](predict, data)
        evaluate_result["loss"] = self.config["calculate_loss_function_test"](
            predict, data
        ).item()
        self.variable_dict["test_avg_mentor"].update(
            evaluate_result, c_sample_number
        )  # update accuracy result and sample count
        if self.variable_dict["step"] % self.config["log_interval"] == 0:  # (currently unused) controls the logging period, log_interval=20
            self.write_log(
                self.variable_dict["epoch"],
                self.variable_dict["step"],
                self.variable_dict["test_avg_mentor"].get(),
            )
        # del evaluate_result, predict
    self.write_log(
        self.variable_dict["epoch"],
        "final",
        self.variable_dict["test_avg_mentor"].get(),
        self.config["lr"],
    )
    self.write_tensorboard(
        "test_log",
        self.variable_dict["epoch"],
        self.variable_dict["test_avg_mentor"].get(),
    )
To summarize: because the speech direction regression model has a very large amount of data, I did not compress at the iteration count from your original settings but instead compress once per epoch. I will now modify it and try setting the compression frequency to the iteration count equivalent to 2-3 epochs, or 1.5 epochs.
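A small sketch of converting that epoch-based frequency into an iteration count for prune_interval (the dataset size and batch size below are assumed placeholders, not values from the code above):

import math

num_train_samples = 2_000_000   # assumed size of the speech training set
batch_size = 64                 # assumed training batch size

iters_per_epoch = math.ceil(num_train_samples / batch_size)
# Pruning once every 1.5 / 2 / 3 epochs corresponds to these intervals:
candidate_intervals = {e: int(e * iters_per_epoch) for e in (1.5, 2, 3)}
print(candidate_intervals)   # {1.5: 46875, 2: 62500, 3: 93750} for the assumed values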
(2) Also, I would like to ask: if compression is proceeding normally, should the training and validation loss decrease along a relatively smooth curve, the same as when training a model normally?
Thank you for your repeated answers and help; I wish you all the best in your work and life!
from torch-model-compression.
I only remember that for ResNet on cifar10, the validation accuracy kept rising during training, dropped a little at each pruning step, and then kept rising; I don't recall what the loss looked like.
from torch-model-compression.
Okay, thanks for the explanation.
from torch-model-compression.
Related Issues (20)
- Are there any parameter and speed comparisons on common models? HOT 2
- Error when running prune.py from examples HOT 7
- What does self.index_mapping mean, and what is it used for? HOT 1
- The torchslim example code on cifar10 outputs 64 dimensions instead of 10 HOT 3
- Is there a supporting paper? HOT 2
- Has QAT quantization-aware training been tried on YOLO-series models? HOT 1
- About how to import self-built models for compression HOT 1
- The same error encountered when pruning resnet and a self-built model HOT 1
- During pruning the earlier layers are fine, but the last few layers raise an error when aligning masks to the same size
- Can the QAT-quantized model calculate inference speed? HOT 1
- On the performance comparison of ResRep models HOT 4
- yolov5 resrep pruning HOT 5
- QAT quantization can export to onnx; can it be deployed on tensorrt? HOT 3
- Error when pruning resnet50 HOT 1
- How to quantize and compress the trained model? HOT 1
- TypeError: cannot assign 'torch.cuda.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected) HOT 6
- Error when pruning a segmentation network (bisenetv2) HOT 17
- Large gap in prediction results after pruning in the demo? HOT 15
- Is BiSeNet already supported? HOT 1