hankye / pagcp
PAGCP for the compression of YOLOv5
License: GNU General Public License v3.0
Hi, thanks for your work. However, I failed to run compress.py following the commands in your README. I hit quite a few errors in the import section, for example:
ImportError: cannot import name 'color_list' from 'utils.plots' (/PRBNet_PyTorch/prb_PAGCP/utils/plots.py)
ImportError: cannot import name 'fitness' from 'utils.metrics' (/PRBNet_PyTorch/prb_PAGCP/utils/metrics.py)
compress.py: error: unrecognized arguments: --sequential
Could you give me some advice on these? Thanks for your time and patience.
I replaced Conv with GhostConv and C3 with C3Ghost. How should I modify the files to complete the pruning? I keep running into errors.
Hello, I see from your paper that you used PAGCP on the NYUv2 dataset, but in the code here you only have implementation for VOC and COCO, is it possible to get the implementation for NYUv2? If not, any tips would be appreciated. Thanks!
Hello, author! On Colab I ran: !python compress.py --model yolov5s.yaml --dataset COCO --data coco.yaml --batch 64 --weights /content/PAGCP-main/yolov5s.pt --initial_rate 0.06 --initial_thres 6. --topk 0.8 --exp --device 0, after which I get:
pruning 0/51: group3, base_loss:539.222351, base_b:215.046875, base_o:243.176361, base_c:80.999107, ratio:0.05, thres:0.06
10% 4/40 [00:02<00:22, 1.61it/s]
and it hangs at this point. What could be the reason?
Hello, thanks for open-sourcing this. There is a piece of code I don't understand and would like to ask about:
def set_group(self, model):
bottleneck_index = [2, 4, 6]
self.groups = [[f'model[{i}].m[{n}].cv2.conv' for n in
range(len(model.module[i].m if hasattr(model, 'module') else model[i].m))] + [
f'model[{i}].cv1.conv'] for i in bottleneck_index]
What does the group in this function in sensitivity.py mean? Does bottleneck_index give the indices, within the model, of the layers to be pruned? Are these the only layers pruned during pruning?
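For reference, a minimal, self-contained sketch of what that comprehension produces (the bottleneck counts here are stand-ins for len(model[i].m), not the real YOLOv5 values): for each C3 block index i, it gathers the name of the cv2.conv layer of every bottleneck inside the block plus the block's cv1.conv, so those coupled layers form one pruning group.

```python
# Hypothetical stand-in for len(model[i].m): number of bottlenecks per C3 block.
num_bottlenecks = {2: 1, 4: 2, 6: 3}
bottleneck_index = [2, 4, 6]

# Same structure as set_group: one list of layer names per C3 block.
groups = [
    [f'model[{i}].m[{n}].cv2.conv' for n in range(num_bottlenecks[i])]
    + [f'model[{i}].cv1.conv']
    for i in bottleneck_index
]
print(groups[0])  # ['model[2].m[0].cv2.conv', 'model[2].cv1.conv']
```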
Hello, may I ask whether a self-built dataset can be used with this pruning method? Currently the dataset option only accepts ['VOC', 'COCO'].
Hello, author. The flowchart contains FLOPs < Γ, where Γ is the retained proportion of computation or parameters. What is this parameter set to in the paper, and which variable does it correspond to in the code?
Hi, will you consider coming out with yolov8 pruning as a follow-up?
Hello, author. I pruned a model with your framework and the parameter count and FLOPs were both roughly halved, but the inference time barely changed after pruning. Is this normal?
Thanks for your great work! I am curious about the compression results of Faster R-CNN on a larger dataset such as COCO, which are not shown in your paper. Could you share some information on this?
Is it possible to use this algorithm to identify the channels least suited to our task without deleting them? (That is, select the less suitable channels without actually removing them.) Thank you for your guidance.
Hello, after computing the FLOPs of every convolution layer, I found that neither ascending nor descending FLOPs order matches the actual pruning order. What could be the problem?
Hello, author. I have a question about using your code. I got it running on my machine and finished a complete run, but when I resumed the next day it raised an error, even though I had not changed anything. What could be the cause?
Traceback (most recent call last):
File "C:/YOLO/PAGCP-main/compress.py", line 612, in <module>
main(opt)
File "C:/YOLO/PAGCP-main/compress.py", line 576, in main
opt.save_dir = str(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok))
File "C:\YOLO\PAGCP-main\utils\general.py", line 804, in increment_path
matches = [re.search(rf"%s{sep}(\d+)" % path, d) for d in dirs]
File "C:\YOLO\PAGCP-main\utils\general.py", line 804, in <listcomp>
matches = [re.search(rf"%s{sep}(\d+)" % path, d) for d in dirs]
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\re.py", line 201, in search
return _compile(pattern, flags).search(string)
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\re.py", line 304, in _compile
p = sre_compile.compile(pattern, flags)
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_compile.py", line 764, in compile
p = sre_parse.parse(p, flags)
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 948, in parse
p = _parse_sub(source, state, flags & SRE_FLAG_VERBOSE, 0)
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 443, in _parse_sub
itemsappend(_parse(source, state, verbose, nested + 1,
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 525, in _parse
code = _escape(source, this, state)
File "D:\ProgramData\Anaconda3\envs\GPU1\lib\sre_parse.py", line 426, in _escape
raise source.error("bad escape %s" % escape, len(escape))
re.error: bad escape \e at position 10
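For reference, this "bad escape" arises when a Windows path is interpolated directly into a regex pattern: backslash sequences such as \e in ...\exp are invalid regex escapes. A small sketch of the failure and one common fix, re.escape (paths here are hypothetical):

```python
import re

path = r"runs\train\exp"   # hypothetical Windows save directory
sep = ""

# Unescaped, the pattern becomes "runs\train\exp(\d+)": "\t" is read as a tab
# and "\e" is an invalid escape -> re.error: bad escape \e at position 10.
try:
    re.search(rf"%s{sep}(\d+)" % path, r"runs\train\exp2")
except re.error as e:
    print(e)

# Escaping the path first makes the backslashes literal:
m = re.search(rf"{re.escape(path)}{sep}(\d+)", r"runs\train\exp2")
print(m.group(1))  # '2'
```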
Hello, I read the paper and it seems there is no pruning test on YOLOv5s. Is that because the s model, already scaled down, has less redundancy than the larger models, so pruning yields little improvement?
I git-cloned the repository, tried the flow suggested in README.md, and ran compress.py, but I get an error as the "Start Pruning" step begins. Attaching an image showing the corresponding error.
Error: RuntimeError: result type Float can't be cast to the desired output type long int
Command: python compress.py --model test1 --dataset COCO --data coco.yaml --batch 64 --weights yolov5s.pt --initial_rate 0.06 --initial_thres 6. --topk 0.8 --exp --device 0
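For reference, this cast error typically comes from PyTorch's refusal to perform in-place operations that would silently cast a float result into an integer tensor, which surfaced in older YOLOv5 loss code on newer torch releases. A minimal sketch of the failure mode; the loss.py line mentioned in the comment is the widely circulated workaround, stated here as an assumption rather than verified against this repo:

```python
import torch

long_t = torch.zeros(3, dtype=torch.long)
float_t = torch.tensor([0.5, 1.5, 2.5])

# In-place addition of a float tensor into a long tensor is rejected,
# producing "result type Float can't be cast to the desired output type Long".
try:
    long_t += float_t
except RuntimeError as e:
    print(e)

# Commonly suggested workaround in YOLOv5's utils/loss.py (build_targets):
#   gain = torch.ones(7, device=targets.device).long()
# Alternatively, run with an older torch release (< 1.12).
```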
I want to prune an already-pruned model a second time to further compress the parameter count. Currently I load the pruned model first and then load the weights, but I run into the following error. Is this approach correct?
RuntimeError: Given groups=1, expected weight to be at least 1 at dimension 0, but got weight of size [0, 2, 1, 1] instead
How can I load the provided .pkl file into YOLOv5 and then export it to ONNX?
I used the source code to prune on my own dataset, but the pruning rate is only 24%. Which parameters should I modify to raise the pruning rate?