
mkdocs's Introduction

YOLO Vision banner

中文 | 한국어 | 日本語 | Русский | Deutsch | Français | Español | Português | हिन्दी | العربية

Ultralytics CI Ultralytics Code Coverage YOLOv8 Citation Docker Pulls Discord
Run on Gradient Open In Colab Open In Kaggle

Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.

We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!

To request an Enterprise License please complete the form at Ultralytics Licensing.

YOLOv8 performance plots

Ultralytics GitHub space Ultralytics LinkedIn space Ultralytics Twitter space Ultralytics YouTube space Ultralytics TikTok space Ultralytics Instagram space Ultralytics Discord

Documentation

See below for a quickstart installation and usage example, and see the YOLOv8 Docs for full documentation on training, validation, prediction and deployment.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

PyPI version Downloads

pip install ultralytics

For alternative installation methods including Conda, Docker, and Git, please refer to the Quickstart Guide.
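
After installation, a quick sanity check confirms the installed package and whether a GPU is visible. This is a minimal sketch using the checks() helper that the official Ultralytics notebooks call:

import ultralytics

ultralytics.checks()  # prints Ultralytics, Python and torch versions plus CUDA device availability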

Usage

CLI

YOLOv8 may be used directly in the Command Line Interface (CLI) with a yolo command:

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'

yolo can be used for a variety of tasks and modes and accepts additional arguments, e.g. imgsz=640. See the YOLOv8 CLI Docs for examples.
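
For example, a variant of the command above with an explicit image size and confidence threshold (the specific values here are illustrative, not required):

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg' imgsz=640 conf=0.25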

Python

YOLOv8 may also be used directly in a Python environment, and accepts the same arguments as in the CLI example above:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco8.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format

See YOLOv8 Python Docs for more examples.
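
As a minimal sketch of working with the returned predictions (continuing the example above), each Results object exposes the detected boxes and class names:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
results = model("https://ultralytics.com/images/bus.jpg")  # returns a list of Results objects

for r in results:
    print(r.boxes.xyxy)   # bounding boxes in (x1, y1, x2, y2) pixel coordinates
    print(r.boxes.conf)   # per-box confidence scores
    print(r.boxes.cls)    # per-box class indices
    print(r.names)        # mapping from class index to class name
    annotated = r.plot()  # annotated image as a BGR numpy array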

Notebooks

Ultralytics provides interactive notebooks for YOLOv8, covering training, validation, tracking, and more. Each notebook is paired with a YouTube tutorial, making it easy to learn and implement advanced YOLOv8 features.

Docs | Notebook | YouTube
YOLOv8 Train, Val, Predict and Export Modes | Open In Colab | Ultralytics Youtube Video
Ultralytics HUB QuickStart | Open In Colab | Ultralytics Youtube Video
YOLOv8 Multi-Object Tracking in Videos | Open In Colab | Ultralytics Youtube Video
YOLOv8 Object Counting in Videos | Open In Colab | Ultralytics Youtube Video
YOLOv8 Heatmaps in Videos | Open In Colab | Ultralytics Youtube Video
Ultralytics Datasets Explorer with SQL and OpenAI Integration 🚀 New | Open In Colab | Ultralytics Youtube Video

Models

YOLOv8 Detect, Segment and Pose models pretrained on the COCO dataset are available here, as well as YOLOv8 Classify models pretrained on the ImageNet dataset. Track mode is available for all Detect, Segment and Pose models.

Ultralytics YOLO supported tasks

All Models download automatically from the latest Ultralytics release on first use.
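
Since Track mode is available for all Detect, Segment and Pose models, a minimal tracking sketch looks like the following (the video path is a placeholder):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any Detect, Segment or Pose model
results = model.track(source="path/to/video.mp4", show=True, tracker="bytetrack.yaml")  # ByteTrack is the default tracker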

Detection (COCO)

See Detection Docs for usage examples with these models trained on COCO, which include 80 pre-trained classes.

Model | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n 640 37.3 80.4 0.99 3.2 8.7
YOLOv8s 640 44.9 128.4 1.20 11.2 28.6
YOLOv8m 640 50.2 234.7 1.83 25.9 78.9
YOLOv8l 640 52.9 375.2 2.39 43.7 165.2
YOLOv8x 640 53.9 479.1 3.53 68.2 257.8
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by yolo val detect data=coco.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val detect data=coco.yaml batch=1 device=0|cpu
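
The reproduce commands above use the CLI; a roughly equivalent Python sketch for validating a pretrained detection model is:

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
metrics = model.val(data="coco.yaml")  # downloads and evaluates on COCO val2017
print(metrics.box.map)    # mAP 50-95
print(metrics.box.map50)  # mAP 50
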
Detection (Open Images V7)

See Detection Docs for usage examples with these models trained on Open Images V7, which include 600 pre-trained classes.

Model | size (pixels) | mAPval 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n 640 18.4 142.4 1.21 3.5 10.5
YOLOv8s 640 27.7 183.1 1.40 11.4 29.7
YOLOv8m 640 33.6 408.5 2.26 26.2 80.6
YOLOv8l 640 34.9 596.9 2.43 44.1 167.4
YOLOv8x 640 36.3 860.6 3.56 68.7 260.6
  • mAPval values are for single-model single-scale on Open Images V7 dataset.
    Reproduce by yolo val detect data=open-images-v7.yaml device=0
  • Speed averaged over Open Images V7 val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val detect data=open-images-v7.yaml batch=1 device=0|cpu
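
A loading sketch for the Open Images V7 weights, assuming the released checkpoint names follow the yolov8n-oiv7.pt pattern:

from ultralytics import YOLO

model = YOLO("yolov8n-oiv7.pt")  # Open Images V7 pretrained detection model (600 classes)
results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].names)  # Open Images V7 class names
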
Segmentation (COCO)

See Segmentation Docs for usage examples with these models trained on COCO-Seg, which include 80 pre-trained classes.

Model | size (pixels) | mAPbox 50-95 | mAPmask 50-95 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n-seg 640 36.7 30.5 96.1 1.21 3.4 12.6
YOLOv8s-seg 640 44.6 36.8 155.7 1.47 11.8 42.6
YOLOv8m-seg 640 49.9 40.8 317.0 2.18 27.3 110.2
YOLOv8l-seg 640 52.3 42.6 572.4 2.79 46.0 220.5
YOLOv8x-seg 640 53.4 43.4 712.1 4.02 71.8 344.1
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by yolo val segment data=coco-seg.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val segment data=coco-seg.yaml batch=1 device=0|cpu
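
A minimal segmentation sketch showing how masks are exposed on the results:

from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")
results = model("https://ultralytics.com/images/bus.jpg")
masks = results[0].masks  # None if nothing was detected
if masks is not None:
    print(masks.data.shape)  # (num_objects, H, W) binary mask tensor
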
Pose (COCO)

See Pose Docs for usage examples with these models trained on COCO-Pose, which include 1 pre-trained class, person.

Model | size (pixels) | mAPpose 50-95 | mAPpose 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n-pose 640 50.4 80.1 131.8 1.18 3.3 9.2
YOLOv8s-pose 640 60.0 86.2 233.2 1.42 11.6 30.2
YOLOv8m-pose 640 65.0 88.8 456.3 2.00 26.4 81.0
YOLOv8l-pose 640 67.6 90.0 784.5 2.59 44.4 168.6
YOLOv8x-pose 640 69.2 90.2 1607.1 3.73 69.4 263.2
YOLOv8x-pose-p6 1280 71.6 91.2 4088.7 10.04 99.1 1066.4
  • mAPval values are for single-model single-scale on COCO Keypoints val2017 dataset.
    Reproduce by yolo val pose data=coco-pose.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val pose data=coco-pose.yaml batch=1 device=0|cpu
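
A minimal pose sketch showing how keypoints are exposed on the results:

from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")
results = model("https://ultralytics.com/images/bus.jpg")
keypoints = results[0].keypoints
print(keypoints.xy.shape)  # (num_persons, 17, 2) COCO keypoint coordinates per detected person
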
OBB (DOTAv1)

See OBB Docs for usage examples with these models trained on DOTAv1, which include 15 pre-trained classes.

Model | size (pixels) | mAPtest 50 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B)
YOLOv8n-obb 1024 78.0 204.77 3.57 3.1 23.3
YOLOv8s-obb 1024 79.5 424.88 4.07 11.4 76.3
YOLOv8m-obb 1024 80.5 763.48 7.61 26.4 208.6
YOLOv8l-obb 1024 80.7 1278.42 11.83 44.5 433.8
YOLOv8x-obb 1024 81.36 1759.10 13.23 69.5 676.7
  • mAPtest values are for single-model multiscale on DOTAv1 dataset.
    Reproduce by yolo val obb data=DOTAv1.yaml device=0 split=test and submit merged results to DOTA evaluation.
  • Speed averaged over DOTAv1 val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu
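
A minimal OBB sketch showing how oriented bounding boxes are exposed on the results (the aerial image path is a placeholder):

from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")
results = model("path/to/aerial_image.jpg")  # placeholder path to an aerial/satellite image
obb = results[0].obb   # oriented bounding boxes
print(obb.xyxyxyxy)    # 4-corner polygon coordinates for each detection
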
Classification (ImageNet)

See Classification Docs for usage examples with these models trained on ImageNet, which include 1000 pretrained classes.

Model | size (pixels) | acc top1 | acc top5 | Speed CPU ONNX (ms) | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) at 640
YOLOv8n-cls 224 69.0 88.3 12.9 0.31 2.7 4.3
YOLOv8s-cls 224 73.8 91.7 23.4 0.35 6.4 13.5
YOLOv8m-cls 224 76.8 93.5 85.4 0.62 17.0 42.7
YOLOv8l-cls 224 76.8 93.5 163.0 0.87 37.5 99.7
YOLOv8x-cls 224 79.0 94.6 232.0 1.01 57.4 154.8
  • acc values are model accuracies on the ImageNet dataset validation set.
    Reproduce by yolo val classify data=path/to/ImageNet device=0
  • Speed averaged over ImageNet val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val classify data=path/to/ImageNet batch=1 device=0|cpu
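
A minimal classification sketch showing how class probabilities are exposed on the results:

from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")
results = model("https://ultralytics.com/images/bus.jpg")
probs = results[0].probs
print(probs.top1, probs.top1conf)    # top-1 ImageNet class index and its confidence
print(results[0].names[probs.top1])  # human-readable class name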

Integrations

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with Roboflow, ClearML, Comet, Neural Magic and OpenVINO, can optimize your AI workflow.


Ultralytics active learning integrations

  • Roboflow: Label and export your custom datasets directly to YOLOv8 for training with Roboflow
  • ClearML ⭐ NEW: Automatically track, visualize and even remotely train YOLOv8 using ClearML (open-source!)
  • Comet ⭐ NEW: Free forever, Comet lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions
  • Neural Magic ⭐ NEW: Run YOLOv8 inference up to 6x faster with Neural Magic DeepSparse

Ultralytics HUB

Experience seamless AI with Ultralytics HUB ⭐, the all-in-one solution for data visualization, YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly Ultralytics App. Start your journey for Free now!

Ultralytics HUB preview image

Contribute

We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our Contributing Guide to get started, and fill out our Survey to send us feedback on your experience. Thank you 🙏 to all our contributors!

Ultralytics open-source contributors

License

Ultralytics offers two licensing options to accommodate diverse use cases:

  • AGPL-3.0 License: This OSI-approved open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the LICENSE file for more details.
  • Enterprise License: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through Ultralytics Licensing.

Contact

For Ultralytics bug reports and feature requests please visit GitHub Issues, and join our Discord community for questions and discussions!


Ultralytics GitHub space Ultralytics LinkedIn space Ultralytics Twitter space Ultralytics YouTube space Ultralytics TikTok space Ultralytics Instagram space Ultralytics Discord

mkdocs's People

Contributors

glenn-jocher, pderrenger, ultralyticsassistant


mkdocs's Issues

Author attribution problems

Author attributions are not always obtained correctly from git, and possibly also not from the GitHub REST API fallback that looks up authors by email address (possibly because the user does not expose a public email address on their profile).

Reproduce

git clone https://github.com/ultralytics/ultralytics
cd ultralytics
pip install -e ".[dev]"
mkdocs serve -f docs/mkdocs.yml

Output:

...
INFO    -  Building documentation...
INFO    -  Cleaning site directory
INFO    -  Converting notebook (execute=False): /Users/glennjocher/PycharmProjects/ultralytics/docs/en/datasets/explorer/explorer.ipynb
Running GitHub REST API for author [email protected]
Running GitHub REST API for author [email protected]
WARNING (mkdocs_ultralytics_plugin): No username found for [email protected]
Running GitHub REST API for author [email protected]
WARNING (mkdocs_ultralytics_plugin): No username found for [email protected]
Running GitHub REST API for author [email protected]
Running GitHub REST API for author [email protected]
Running GitHub REST API for author [email protected]
WARNING (mkdocs_ultralytics_plugin): No username found for [email protected]
Running GitHub REST API for author [email protected]
Running GitHub REST API for author [email protected]
WARNING (mkdocs_ultralytics_plugin): No username found for [email protected]
INFO    -  Documentation built in 72.59 seconds
INFO    -  [14:57:52] Watching paths for changes: 'docs/en', 'docs/mkdocs.yml'
INFO    -  [14:57:52] Serving on http://127.0.0.1:8000/

TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

Whenever I run mkdocs build with the Ultralytics plugin configured, I get this error:

ERROR   -  Error building page 'index.md': unsupported operand type(s) for +: 'NoneType' and 'str'
Traceback (most recent call last):
  File "/DEV_DIRECTORY/essays/.venv/bin/mkdocs", line 8, in <module>
    sys.exit(cli())
             ^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/mkdocs/__main__.py", line 286, in build_command
    build.build(cfg, dirty=not clean)
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/mkdocs/commands/build.py", line 349, in build
    _build_page(
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/mkdocs/commands/build.py", line 235, in _build_page
    output = config.plugins.on_post_page(output, page=page, config=config)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/mkdocs/plugins.py", line 586, in on_post_page
    return self.run_event('post_page', output, page=page, config=config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/mkdocs/plugins.py", line 507, in run_event
    result = method(item, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/DEV_DIRECTORY/essays/.venv/lib/python3.11/site-packages/plugin/main.py", line 82, in on_post_page
    page_url = config['site_url'] + page.url.rstrip('/')
               ~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

When I remove the plugin from the mkdocs.yml file the build works fine.

I'm running Python 3.11.2 in a venv.

This is my mkdocs.yml file:

site_name: BetaFaith Essays
theme:
  name: material
  palette:

    # Palette toggle for light mode
    - scheme: default
      primary: blue grey
      accent: deep orange
      toggle:
        icon: material/brightness-7
        name: Switch to dark mode

    # Palette toggle for dark mode
    - scheme: slate
      primary: black
      accent: deep orange
      toggle:
        icon: material/brightness-4
        name: Switch to light mode
  features:
    - navigation.tabs
    - navigation.instant
    - navigation.sections
markdown_extensions:
  - footnotes
plugins:
  - ultralytics

And my requirements.txt file:

mkdocs
mkdocs-material
mkdocs-ultralytics-plugin
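
From the traceback, the failing expression is config['site_url'] + page.url when no site_url is set in mkdocs.yml, so the left operand is None. A minimal defensive sketch of that line (hypothetical, not the plugin's actual fix) would be:

# plugin/main.py, on_post_page: fall back to an empty string when site_url is not configured
page_url = (config.get("site_url") or "") + page.url.rstrip("/")

Setting a site_url value in mkdocs.yml should likewise avoid the None concatenation.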

Twitter cards aren't being generated

Thank you for this awesome plugin.

I've followed the instructions in the README of this project to enable and use it for my website written in Markdown.
The social card for LinkedIn is generated correctly, with the image and title being picked up from the page. However, the Twitter card is not getting generated.

My setup is: files are in Markdown with no front matter; the site generator is MkDocs, and deployment is through GitHub Actions to the gh-pages branch. The relevant portion of the mkdocs.yml file is:

plugins:
    - search
    - macros
    - ultralytics:
        verbose: True
        enabled: True
        default_image: "https://aninditabasu.github.io/comparative-mythology/images/logo.jpg"
        add_desc: True
        add_image: True
        add_keywords: False
        add_share_buttons: False
        add_dates: False
        add_authors: False

The generated HTML pages also contain the correct meta tags:

<meta content="Dragon killings" name="title"/>
<meta content="website" property="og:type"/>
<meta content="https://aninditabasu.github.io/comparative-mythology/myths/dragon_slaying" property="og:url"/>
<meta content="Dragon killings" property="og:title"/>
<meta content="Dragons are mythical monsters that look like giant reptiles. The etymology of the word suggests that dragon is derived from the Greek word drakōn, which means snake 1. Snakes have, from the earliest times, been feared because of their human-killing venomous bite. It was easy for the ancients to imagine anything fearsome and cruel to be a dragon, and any person overcoming or killing such a menace to be a hero, worthy of adulation." property="og:description"/>
<meta content="../../images/here_be_dragons.png" property="og:image"/>
<meta content="summary_large_image" property="twitter:card"/>
<meta content="https://aninditabasu.github.io/comparative-mythology/myths/dragon_slaying" property="twitter:url"/>
<meta content="Dragon killings" property="twitter:title"/>
<meta content="Dragons are mythical monsters that look like giant reptiles. The etymology of the word suggests that dragon is derived from the Greek word drakōn, which means snake 1. Snakes have, from the earliest times, been feared because of their human-killing venomous bite. It was easy for the ancients to imagine anything fearsome and cruel to be a dragon, and any person overcoming or killing such a menace to be a hero, worthy of adulation." property="twitter:description"/>
<meta content="../../images/here_be_dragons.png" property="twitter:image"/>

However, the social card for Twitter isn't getting generated (LinkedIn card is generated fine).

Please let me know what further info is needed to help debug this issue.

test3

Can you provide me with the official Ultralytics docs page URL?
