
Comments (12)

github-actions commented on September 24, 2024

πŸ‘‹ Hello @caarmeecoorbii, thank you for raising an issue about Ultralytics HUB πŸš€! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a πŸ› Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, and environment details, so that we can provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

from hub.

pderrenger commented on September 24, 2024

Hello Carme Corbi,

Great question! Understanding metrics is crucial for evaluating your model's performance. For detailed explanations on metrics like Box Loss, Object Loss, and others, our Ultralytics HUB Docs offer comprehensive insights. These sections will guide you in interpreting the metrics to assess training progress, detect overfitting, and understand if your dataset is being trained effectively.

You can visit the Ultralytics HUB Docs at https://docs.ultralytics.com/hub for more information on these metrics and tips on improving your training outcomes.

Happy training! πŸš€


caarmeecoorbii commented on September 24, 2024

Thank you so much! I've checked the Ultralytics HUB Docs, but I can't find where the metrics are explained. Could you provide a link to the page that explains them?


pderrenger commented on September 24, 2024

Hello again!

I'm glad to hear you've looked into the Ultralytics HUB Docs! My apologies for any confusion caused earlier. It seems I can't directly provide links other than to our main documentation page at https://docs.ultralytics.com/hub.

A more detailed exploration within the Docs, especially in the sections related to training, evaluation, and tutorials, should give you insights into understanding and leveraging the metrics for your training sessions. If the specifics on Box Loss and Object Loss aren't directly highlighted, browsing through sections covering model evaluation metrics might offer the information indirectly through broader context.

Hope that helps narrow down your search! Happy exploring! πŸ•΅οΈβ€β™‚οΈ


caarmeecoorbii commented on September 24, 2024

Hello again,

Thank you very much. Is it possible to know how Box Loss and Object Loss are calculated?


pderrenger commented on September 24, 2024

Hello!

Absolutely, I'd be happy to explain briefly without diving into code!

  • Box Loss measures the accuracy of the predicted bounding boxes against the ground truth boxes. It takes into account the differences in the center coordinates, width, and height of the predicted versus actual boxes.

  • Object Loss evaluates how well the model predicts the presence of an object within a bounding box. It compares the model's confidence scores against the actual presence (or absence) of objects.

Both losses are crucial for fine-tuning the accuracy of object detection models. For the detailed mathematical formulations and how they're integrated into the training process, I'd recommend a deep dive into the official YOLO papers or object detection literature. πŸ“š
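To make these two losses concrete, here is a minimal Python sketch. It is an illustration only: it uses plain IoU for the box loss and binary cross-entropy for the objectness loss, whereas real YOLO implementations use more elaborate variants (e.g. CIoU plus distribution focal loss in YOLOv8).

```python
import math

def box_loss_iou(pred, gt):
    """Toy box loss: 1 - IoU between a predicted and a ground-truth box.

    Boxes are (x1, y1, x2, y2). Real YOLO losses use refinements such as
    CIoU, which also penalize center distance and aspect-ratio mismatch.
    """
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    union = area(pred) + area(gt) - inter
    return 1.0 - inter / union

def object_loss_bce(conf, target):
    """Toy objectness loss: binary cross-entropy between the predicted
    confidence score and the 0/1 target (object present or not)."""
    eps = 1e-7
    conf = min(max(conf, eps), 1.0 - eps)
    return -(target * math.log(conf) + (1 - target) * math.log(1 - conf))
```

A perfectly predicted box gives a box loss of 0.0, and a confident, correct objectness prediction drives the BCE term toward zero.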

Happy modeling!


caarmeecoorbii commented on September 24, 2024

Thanks!

I have just finished training a YOLOv8 detector for 100 epochs. Is there any way to retrieve the weights from epoch 25?


sergiuwaxmann commented on September 24, 2024

@caarmeecoorbii There is no way to retrieve the weights from epoch 25, but the final weights are the best weights.
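For context, "best" means the checkpoint whose validation fitness score was highest across all epochs, conceptually like the hypothetical helper below. (When training locally with the `ultralytics` Python package rather than HUB, the `save_period` train argument can additionally save a checkpoint every N epochs.)

```python
def pick_best_epoch(fitness_by_epoch):
    """Return the index of the epoch with the highest fitness.

    Fitness is a scalar summary of validation metrics (Ultralytics
    combines mAP50 and mAP50-95). This is a hypothetical illustration
    of how the 'best.pt' checkpoint is chosen, not the library's code.
    """
    return max(range(len(fitness_by_epoch)), key=lambda i: fitness_by_epoch[i])
```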


caarmeecoorbii commented on September 24, 2024

Thanks! I have another question about trackers. Are only BoT-SORT and ByteTrack available? Where can I find more information about the architecture of these trackers you use?


pderrenger commented on September 24, 2024

Hello!

Yes, within the Ultralytics HUB, BoT-SORT and ByteTrack are the trackers currently available. For detailed insights into the architecture and implementation of these trackers, the original research papers for BoT-SORT and ByteTrack offer the most comprehensive information. Although I can't link directly to external sources, searching for these papers by title in academic databases or on preprint servers like arXiv will lead you to their methodologies, architectures, and performance evaluations.

Hope this points you in the right direction! Happy tracking! πŸ•΅οΈβ€β™‚οΈ


caarmeecoorbii commented on September 24, 2024

What are the best ByteTrack parameters if I want to detect small objects?
[image attached]


pderrenger commented on September 24, 2024

Hello!

For detecting small objects with ByteTrack in the Ultralytics HUB, optimizing parameters such as the detection threshold (--conf-thres) and Non-Maximum Suppression (NMS) threshold (--iou-thres) can be particularly effective. Lowering the --conf-thres may help in picking up smaller objects that the model is less confident about, while adjusting the --iou-thres can help in managing how detections are merged.

Experimenting with these parameters should help you tailor ByteTrack's performance to better detect small objects. Happy detecting! πŸš€
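As an illustration, recent versions of the `ultralytics` package accept a custom tracker YAML via `model.track(source, tracker="custom_bytetrack.yaml")`. The sketch below lowers the association thresholds to keep low-confidence detections of small objects; the field names follow the `bytetrack.yaml` shipped with the package, and the values are hypothetical starting points to tune, not recommendations.

```yaml
# custom_bytetrack.yaml -- hypothetical tuning for small objects
tracker_type: bytetrack
track_high_thresh: 0.25   # first-stage association threshold; lower keeps faint detections
track_low_thresh: 0.05    # second-stage threshold for low-score boxes
new_track_thresh: 0.25    # minimum score required to start a new track
track_buffer: 30          # frames a lost track is kept alive before removal
match_thresh: 0.8         # matching threshold used during association
```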

