sefaburakokcu / quantized-yolov5
Low Precision (Quantized) YOLOv5
License: GNU General Public License v3.0
I read the LPYOLO paper, and it references the YOLOv3 version, but this repo links to YOLOv5 for training models from scratch.
Which Ultralytics version would you recommend for full FPGA deployment? Thanks!
I trained yolov5m-quant.yaml with coco128.yaml. When training finished, I tried to export the model, but I got the following error:
Traceback (most recent call last):
File "/content/quantized-yolov5/models/experimental.py", line 97, in attempt_load
ema = ckpt['ema' if ckpt.get('ema') else 'model'].float()
KeyError: 'model'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/quantized-yolov5/export.py", line 418, in
main(opt)
File "/content/quantized-yolov5/export.py", line 406, in main
run(**vars(opt))
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/quantized-yolov5/export.py", line 317, in run
model = attempt_load(weights, map_location=device, inplace=True, fuse=True) # load FP32 model
File "/content/quantized-yolov5/models/experimental.py", line 105, in attempt_load
ema.load_state_dict(ckpt)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Model:
Missing key(s) in state_dict: "model.13.m.0.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.13.m.1.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.17.m.0.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.17.m.1.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.20.m.0.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.20.m.1.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.23.m.0.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value", "model.23.m.1.quant_identity.act_quant.fused_activation_quant_proxy.tensor_quant.scaling_impl.value".
size mismatch for model.24.m.0.weight: copying a param with shape torch.Size([255, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 192, 1, 1]).
size mismatch for model.24.m.1.weight: copying a param with shape torch.Size([255, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 384, 1, 1]).
size mismatch for model.24.m.2.weight: copying a param with shape torch.Size([255, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([18, 768, 1, 1]).
Do you have any advice or suggestions about this error? Thank you.
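For context on the shape mismatch above: in a YOLOv5-style Detect head, out_channels = anchors_per_scale * (nc + 5), so 255 corresponds to nc = 80 (COCO) while 18 corresponds to nc = 1. That suggests the checkpoint was trained with 80 classes but export.py rebuilds the model from a config with a different class count, and that the checkpoint itself lacks the 'model'/'ema' entries attempt_load expects. A minimal diagnostic sketch (not from the repo; the weights path is an example) to check both points:

```python
# Inspect a YOLOv5-style checkpoint: does it store a full model or only a state_dict,
# and what class count is baked into the Detect-head convolutions?
import torch

ckpt = torch.load("runs/train/exp/weights/best.pt", map_location="cpu")  # example path

if isinstance(ckpt, dict) and ("ema" in ckpt or "model" in ckpt):
    obj = ckpt.get("ema") or ckpt["model"]
    state = obj.float().state_dict() if hasattr(obj, "state_dict") else obj
else:
    state = ckpt  # bare state_dict: this is what triggers the fallback load_state_dict path above

for name, tensor in state.items():
    if "24.m." in name and name.endswith(".weight"):
        # Detect-head convs: out_channels = anchors_per_scale * (nc + 5)
        # 255 -> nc = 80 (COCO), 18 -> nc = 1
        print(name, tuple(tensor.shape))
```

If the printed shapes show 255 output channels, the nc in the yaml/data config passed to export.py needs to match the 80-class checkpoint (or the model needs to be retrained with the intended class count).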
Hey @bestamigunay @sefaburakokcu,
The FPS mentioned in the paper for 4W4A is about 18 FPS, achieved through proper pipelining. I was wondering if you could provide the code files for that. Thanks in advance!
I have already seen the other issue mentioning YOLOv5. The quantized YOLOv1 is very good, but the differences with YOLOv5 are non-trivial.
Thank you.
I encountered the following error while reproducing your code: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
The specific details are as follows:
Plotting labels...
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/train/exp
Starting training for 300 epochs...
Epoch gpu_mem box obj cls labels img_size
0%| | 0/183 [00:02<?, ?it/s]
Traceback (most recent call last):
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/train.py", line 653, in
main(opt)
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/train.py", line 545, in main
train(opt.hyp, opt, device, callbacks)
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/train.py", line 333, in train
pred = model(imgs) # forward
File "/home/zhujh/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/models/yolo.py", line 156, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/models/yolo.py", line 179, in _forward_once
x = m(x) # run
File "/home/zhujh/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/data/master21/zhujh/software/pythonProject/quantized-yolov5-quantized_yolo/models/common.py", line 150, in forward
x = self.conv(x)
File "/home/zhujh/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/zhujh/anaconda3/envs/yolov5/lib/python3.7/site-packages/brevitas/nn/quant_conv.py", line 191, in forward
return self.forward_impl(input)
File "/home/zhujh/anaconda3/envs/yolov5/lib/python3.7/site-packages/brevitas/nn/quant_layer.py", line 311, in forward_impl
output_scale = output_scale * quant_input.scale.view(output_scale_shape)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Process finished with exit code 1
Is there a good and effective solution to this? Thanks.
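From the traceback, the Brevitas learned activation scale (…scaling_impl.value) appears to live on CPU while the convolution input is on cuda:0. Those scales are ordinary nn.Parameters, so a plain model.to(device) should move them; anything still on CPU was likely created or re-assigned after that call. A minimal sketch (standard PyTorch, not repo-specific; 'model' and 'device' are assumed to come from train.py) to locate the offending tensors:

```python
# Find parameters/buffers that stayed on CPU while training runs on cuda:0.
import torch

device = torch.device("cuda:0")
model = model.to(device)  # re-apply in case quant proxies were built after the original .to()

for name, p in model.named_parameters():
    if p.device != device:
        print("parameter on wrong device:", name, p.device)

for name, b in model.named_buffers():
    if b.device != device:
        print("buffer on wrong device:", name, b.device)
```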
Hi! First of all, thank you very much for your selfless sharing and updates, which have been very helpful for my learning. You mentioned earlier that quantized YOLOv5 cannot be exported for use with FINN. Which layers are the main obstacles here, and do you have any good ideas for replacing them?
Originally posted by @ramyen1 in #36 (comment)
Hi, I'm trying to reproduce your results with the yolov1-tiny model. How did you convert the WIDER FACE data into the correct format? When I download the zip, it only contains images.
Thanks for the great work!
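For reference, the WIDER FACE boxes are distributed separately from the images, in wider_face_split/wider_face_train_bbx_gt.txt. Below is a hedged sketch (not the repo's own converter) that turns those annotations into single-class YOLO label files, assuming the standard annotation layout: image path, face count, then one "x1 y1 w h ..." line per face. Paths are examples.

```python
# Convert WIDER FACE ground-truth annotations to YOLO-format label files.
from pathlib import Path
from PIL import Image

images_root = Path("WIDER_train/images")
labels_root = Path("WIDER_train/labels")
ann_file = Path("wider_face_split/wider_face_train_bbx_gt.txt")

lines = ann_file.read_text().splitlines()
i = 0
while i < len(lines):
    rel_path = lines[i].strip(); i += 1
    n_faces = int(lines[i]); i += 1
    img_w, img_h = Image.open(images_root / rel_path).size

    yolo_lines = []
    for _ in range(max(n_faces, 1)):  # zero-face entries still carry one all-zero line
        parts = lines[i].split(); i += 1
        x1, y1, w, h = map(float, parts[:4])
        if w <= 0 or h <= 0:
            continue  # skip dummy/degenerate boxes
        # YOLO format: class x_center y_center width height, normalized to [0, 1]
        cx, cy = (x1 + w / 2) / img_w, (y1 + h / 2) / img_h
        yolo_lines.append(f"0 {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}")

    out = labels_root / Path(rel_path).with_suffix(".txt")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(yolo_lines))
```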
Hello, I want to quantize a YOLOv5 model and deploy it on a ZCU104 using the DPU. Can a model quantized with Brevitas be compiled and deployed on the DPU?
Is a quantized YOLOv5 available in the repo?
I could only find the quantized YOLOv1 model built from the quantized layers. Are quantized YOLOv5 models available in this repo, or do I need to build them myself using the quantized layers?