
Comments (5)

github-actions commented on July 20, 2024

πŸ‘‹ Hello @Waqas649, thank you for raising an issue about Ultralytics HUB πŸš€! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a πŸ› Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

from hub.

pderrenger commented on July 20, 2024

Hello! 😊 Great to hear about your progress with your custom YOLOv5 model.

Optimizing your model for deployment with OpenVINO can indeed streamline the inference process. While we don't have a predefined online toolkit, converting your model to an OpenVINO-compatible format involves a couple of steps, starting with exporting your model to ONNX format. After that, you can utilize the OpenVINO Model Optimizer to convert the ONNX model to an IR (Intermediate Representation) format that's optimized for inference on various Intel hardware.

For a detailed step-by-step guide, including necessary commands and further optimization tips, please refer to the "Deployment" section in our official documentation at https://docs.ultralytics.com/hub. It offers a comprehensive walkthrough that should fit your needs.

If you encounter any specific issues or have further questions during the process, feel free to reach out here again. Happy optimizing! πŸš€


iraa777 commented on July 20, 2024

Hello,

I am trying to optimize a model I self-trained based on Yolov5s architecture. The device I am trying to run it on has no GPU and is not NVIDIA so I cannot use TensorRT. I originally tried a quantization code, but this is not optimizing my model at all. I attach the code here:
import torch
import torch.quantization
from torch.quantization import default_qconfig
from utils.torch_utils import select_device
from models.common import DetectMultiBackend

# Select the device (CPU here, since the target has no GPU)
device = select_device('')

# Check that the original model loads correctly
DetectMultiBackend(weights="20230329_s.pt", device=device, dnn=False, data='data.yaml', fp16=False)

# Load the YOLOv5 checkpoint
ori_model = torch.load("20230329_s.pt", map_location=device)
print(ori_model.keys())

# Assume the model is stored under the 'model' key
model = ori_model['model']

# Prepare the model for quantization
model.eval()

# Define a quantization configuration targeting all layers
qconfig = default_qconfig
qconfig_dict = {'': qconfig}

# Apply dynamic quantization to the model
quantized_model = torch.quantization.quantize_dynamic(
    model,                      # the model to quantize
    qconfig_spec=qconfig_dict,  # quantization configuration
    dtype=torch.qint8           # target data type after quantization
)

# Replace the model field in the original checkpoint
ori_model['model'] = quantized_model

# Save the quantized checkpoint
torch.save(ori_model, "quantized.pt")
I would appreciate it if anyone has had experience with this. Maybe there is a tool available that I don't know about that could make my work much easier.

Thank you,

Irati


pderrenger commented on July 20, 2024

Hello Irati,

It sounds like you've given a good initial attempt at quantizing your YOLOv5 model! Quantization can indeed be tricky depending on the specific characteristics of the model and the target device's requirements.

Since you’re facing issues with standard dynamic quantization not effectively optimizing your model, you might consider trying static quantization which involves a few additional steps like preparing calibration data to better understand the distribution of inputs. This approach can sometimes yield better performance outcomes, especially if dynamic quantization doesn't meet your expectations.
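For illustration, here is a minimal static-quantization sketch in PyTorch. A toy model stands in for YOLOv5 (quantizing the full network needs per-layer care, and some YOLOv5 ops may not have quantized kernels); `'fbgemm'` assumes an x86 CPU, while ARM devices would use `'qnnpack'`:

```python
import torch
import torch.nn as nn
import torch.quantization

# Toy model standing in for a YOLOv5 backbone block
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # tensors enter int8 here
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # back to float at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()
# 'fbgemm' targets x86 CPUs; use 'qnnpack' on ARM
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(model)

# Calibration: run representative inputs so observers record activation ranges
with torch.no_grad():
    for _ in range(8):
        prepared(torch.randn(1, 3, 64, 64))

# Convert to a真 quantized model with int8 weights
quantized = torch.quantization.convert(prepared)
```

The calibration loop is the key difference from dynamic quantization: activation ranges are measured ahead of time on representative data, so both weights and activations can run in int8.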

Another alternative might be to explore pruning before quantization, which reduces the model size and complexity by removing unnecessary weights, potentially making the quantization more effective.
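A quick sketch of what unstructured magnitude pruning looks like with PyTorch's built-in utilities (a single conv layer stands in for one YOLOv5 layer; the 30% ratio is an arbitrary example):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Single layer standing in for one YOLOv5 conv layer
conv = nn.Conv2d(3, 16, 3)

# Zero out the 30% smallest-magnitude weights (L1 unstructured pruning)
prune.l1_unstructured(conv, name='weight', amount=0.3)

# Make the pruning permanent: drop the mask, bake the zeros into the weight
prune.remove(conv, 'weight')

sparsity = float((conv.weight == 0).sum()) / conv.weight.numel()
print(f"sparsity: {sparsity:.2f}")  # roughly 0.30
```

Note that zeroed weights alone do not shrink the file or speed up inference unless the runtime or storage format exploits sparsity, but they can make subsequent quantization behave better.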

If these approaches don’t suit your needs, looking into other hardware-specific libraries compatible with your device's architecture (other than TensorRT) could be beneficial. Some devices have specialized libraries or SDKs designed to optimize models specifically for their architecture.

Keep experimenting and don’t hesitate to reach out if you have more questions! πŸ’ͺ


github-actions commented on July 20, 2024

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.


Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐

