
json2yolo's Introduction




Ultralytics YOLOv8 is a cutting-edge, state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of object detection and tracking, instance segmentation, image classification and pose estimation tasks.

We hope that the resources here will help you get the most out of YOLOv8. Please browse the YOLOv8 Docs for details, raise an issue on GitHub for support, and join our Discord community for questions and discussions!

To request an Enterprise License please complete the form at Ultralytics Licensing.

YOLOv8 performance plots


Documentation

See below for a quickstart installation and usage example, and see the YOLOv8 Docs for full documentation on training, validation, prediction and deployment.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.


pip install ultralytics

For alternative installation methods including Conda, Docker, and Git, please refer to the Quickstart Guide.

Usage

CLI

YOLOv8 may be used directly in the Command Line Interface (CLI) with a yolo command:

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'

yolo can be used for a variety of tasks and modes and accepts additional arguments, e.g. imgsz=640. See the YOLOv8 CLI Docs for examples.

Python

YOLOv8 may also be used directly in a Python environment, and accepts the same arguments as in the CLI example above:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco8.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image
path = model.export(format="onnx")  # export the model to ONNX format

See YOLOv8 Python Docs for more examples.

Notebooks

Ultralytics provides interactive notebooks for YOLOv8, covering training, validation, tracking, and more. Each notebook is paired with a YouTube tutorial, making it easy to learn and implement advanced YOLOv8 features.

  • YOLOv8 Train, Val, Predict and Export Modes (Colab notebook + YouTube tutorial)
  • Ultralytics HUB QuickStart (Colab notebook + YouTube tutorial)
  • YOLOv8 Multi-Object Tracking in Videos (Colab notebook + YouTube tutorial)
  • YOLOv8 Object Counting in Videos (Colab notebook + YouTube tutorial)
  • YOLOv8 Heatmaps in Videos (Colab notebook + YouTube tutorial)
  • Ultralytics Datasets Explorer with SQL and OpenAI Integration 🚀 New (Colab notebook + YouTube tutorial)

Models

YOLOv8 Detect, Segment and Pose models pretrained on the COCO dataset are available here, as well as YOLOv8 Classify models pretrained on the ImageNet dataset. Track mode is available for all Detect, Segment and Pose models.

Ultralytics YOLO supported tasks

All models download automatically from the latest Ultralytics release on first use.

Detection (COCO)

See Detection Docs for usage examples with these models trained on COCO, which include 80 pre-trained classes.

Model      size (pixels)   mAPval 50-95   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B)
YOLOv8n    640             37.3           80.4                  0.99                       3.2          8.7
YOLOv8s    640             44.9           128.4                 1.20                       11.2         28.6
YOLOv8m    640             50.2           234.7                 1.83                       25.9         78.9
YOLOv8l    640             52.9           375.2                 2.39                       43.7         165.2
YOLOv8x    640             53.9           479.1                 3.53                       68.2         257.8
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by yolo val detect data=coco.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val detect data=coco.yaml batch=1 device=0|cpu
Detection (Open Images V7)

See Detection Docs for usage examples with these models trained on Open Images V7, which include 600 pre-trained classes.

Model      size (pixels)   mAPval 50-95   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B)
YOLOv8n    640             18.4           142.4                 1.21                       3.5          10.5
YOLOv8s    640             27.7           183.1                 1.40                       11.4         29.7
YOLOv8m    640             33.6           408.5                 2.26                       26.2         80.6
YOLOv8l    640             34.9           596.9                 2.43                       44.1         167.4
YOLOv8x    640             36.3           860.6                 3.56                       68.7         260.6
  • mAPval values are for single-model single-scale on the Open Images V7 dataset.
    Reproduce by yolo val detect data=open-images-v7.yaml device=0
  • Speed averaged over Open Images V7 val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val detect data=open-images-v7.yaml batch=1 device=0|cpu
Segmentation (COCO)

See Segmentation Docs for usage examples with these models trained on COCO-Seg, which include 80 pre-trained classes.

Model         size (pixels)   mAPbox 50-95   mAPmask 50-95   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B)
YOLOv8n-seg   640             36.7           30.5            96.1                  1.21                       3.4          12.6
YOLOv8s-seg   640             44.6           36.8            155.7                 1.47                       11.8         42.6
YOLOv8m-seg   640             49.9           40.8            317.0                 2.18                       27.3         110.2
YOLOv8l-seg   640             52.3           42.6            572.4                 2.79                       46.0         220.5
YOLOv8x-seg   640             53.4           43.4            712.1                 4.02                       71.8         344.1
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by yolo val segment data=coco-seg.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val segment data=coco-seg.yaml batch=1 device=0|cpu
Pose (COCO)

See Pose Docs for usage examples with these models trained on COCO-Pose, which include 1 pre-trained class, person.

Model             size (pixels)   mAPpose 50-95   mAPpose 50   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B)
YOLOv8n-pose      640             50.4            80.1         131.8                 1.18                       3.3          9.2
YOLOv8s-pose      640             60.0            86.2         233.2                 1.42                       11.6         30.2
YOLOv8m-pose      640             65.0            88.8         456.3                 2.00                       26.4         81.0
YOLOv8l-pose      640             67.6            90.0         784.5                 2.59                       44.4         168.6
YOLOv8x-pose      640             69.2            90.2         1607.1                3.73                       69.4         263.2
YOLOv8x-pose-p6   1280            71.6            91.2         4088.7                10.04                      99.1         1066.4
  • mAPval values are for single-model single-scale on COCO Keypoints val2017 dataset.
    Reproduce by yolo val pose data=coco-pose.yaml device=0
  • Speed averaged over COCO val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val pose data=coco-pose.yaml batch=1 device=0|cpu
OBB (DOTAv1)

See OBB Docs for usage examples with these models trained on DOTAv1, which include 15 pre-trained classes.

Model         size (pixels)   mAPtest 50   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B)
YOLOv8n-obb   1024            78.0         204.77                3.57                       3.1          23.3
YOLOv8s-obb   1024            79.5         424.88                4.07                       11.4         76.3
YOLOv8m-obb   1024            80.5         763.48                7.61                       26.4         208.6
YOLOv8l-obb   1024            80.7         1278.42               11.83                      44.5         433.8
YOLOv8x-obb   1024            81.36        1759.10               13.23                      69.5         676.7
  • mAPtest values are for single-model multiscale on DOTAv1 dataset.
    Reproduce by yolo val obb data=DOTAv1.yaml device=0 split=test and submit merged results to DOTA evaluation.
  • Speed averaged over DOTAv1 val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val obb data=DOTAv1.yaml batch=1 device=0|cpu
Classification (ImageNet)

See Classification Docs for usage examples with these models trained on ImageNet, which include 1000 pretrained classes.

Model         size (pixels)   acc top1   acc top5   Speed CPU ONNX (ms)   Speed A100 TensorRT (ms)   params (M)   FLOPs (B) at 640
YOLOv8n-cls   224             69.0       88.3       12.9                  0.31                       2.7          4.3
YOLOv8s-cls   224             73.8       91.7       23.4                  0.35                       6.4          13.5
YOLOv8m-cls   224             76.8       93.5       85.4                  0.62                       17.0         42.7
YOLOv8l-cls   224             76.8       93.5       163.0                 0.87                       37.5         99.7
YOLOv8x-cls   224             79.0       94.6       232.0                 1.01                       57.4         154.8
  • acc values are model accuracies on the ImageNet dataset validation set.
    Reproduce by yolo val classify data=path/to/ImageNet device=0
  • Speed averaged over ImageNet val images using an Amazon EC2 P4d instance.
    Reproduce by yolo val classify data=path/to/ImageNet batch=1 device=0|cpu

Integrations

Our key integrations with leading AI platforms extend the functionality of Ultralytics' offerings, enhancing tasks like dataset labeling, training, visualization, and model management. Discover how Ultralytics, in collaboration with Roboflow, ClearML, Comet, Neural Magic and OpenVINO, can optimize your AI workflow.


Ultralytics active learning integrations

  • Roboflow: Label and export your custom datasets directly to YOLOv8 for training.
  • ClearML ⭐ NEW: Automatically track, visualize and even remotely train YOLOv8 using ClearML (open-source!).
  • Comet ⭐ NEW: Free forever, Comet lets you save YOLOv8 models, resume training, and interactively visualize and debug predictions.
  • Neural Magic ⭐ NEW: Run YOLOv8 inference up to 6x faster with Neural Magic DeepSparse.

Ultralytics HUB

Experience seamless AI with Ultralytics HUB ⭐, the all-in-one solution for data visualization, YOLOv5 and YOLOv8 🚀 model training and deployment, without any coding. Transform images into actionable insights and bring your AI visions to life with ease using our cutting-edge platform and user-friendly Ultralytics App. Start your journey for Free now!

Ultralytics HUB preview image

Contribute

We love your input! YOLOv5 and YOLOv8 would not be possible without help from our community. Please see our Contributing Guide to get started, and fill out our Survey to send us feedback on your experience. Thank you 🙏 to all our contributors!

Ultralytics open-source contributors

License

Ultralytics offers two licensing options to accommodate diverse use cases:

  • AGPL-3.0 License: This OSI-approved open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing. See the LICENSE file for more details.
  • Enterprise License: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services, bypassing the open-source requirements of AGPL-3.0. If your scenario involves embedding our solutions into a commercial offering, reach out through Ultralytics Licensing.

Contact

For Ultralytics bug reports and feature requests please visit GitHub Issues, and join our Discord community for questions and discussions!



json2yolo's People

Contributors

ashnair1, glenn-jocher, laughing-q, pderrenger, sourcery-ai[bot], ultralyticsassistant


json2yolo's Issues

KeyError: 'iscrowd'

I am trying to convert instances_train2017.json from https://cocodataset.org/#home, and it returns the following error:

Traceback (most recent call last):
  File "/tmp/workspace/codes/JSON2YOLO/general_json2yolo.py", line 393, in <module>
    convert_coco_json(
  File "/tmp/workspace/codes/JSON2YOLO/general_json2yolo.py", line 286, in convert_coco_json
    box = np.array(ann["bbox"], dtype=np.float64)
KeyError: 'bbox'

Any idea on how to solve this error?

How to convert this format into a YOLOv5/v7-compatible .txt file?

[{
  "Id": <int: label id>,
  "ObjectClassName": <string: object class name>,
  "ObjectClassId": <int: object class id as listed in Objectclasses.json>,
  "Left": <int: left bbox coordinate>,
  "Top": <int: top bbox coordinate>,
  "Right": <int: right bbox coordinate>,
  "Bottom": <int: bottom bbox coordinate>
},
{
  "Id": 18111997,
  "ObjectClassName": "example",
  "ObjectClassId": 7,
  "Left": 294,
  "Top": 115,
  "Right": 314,
  "Bottom": 154
}]
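For a format like this, a small standalone converter is enough. The sketch below is not part of JSON2YOLO; it assumes you know each image's width and height (e.g. via PIL) and that ObjectClassId can be used directly as the YOLO class index:

import json

def corners_to_yolo(json_file, out_txt, img_w, img_h):
    # Convert Left/Top/Right/Bottom boxes to normalized YOLO lines:
    # <class_id> <x_center> <y_center> <width> <height>
    with open(json_file) as f:
        anns = json.load(f)
    with open(out_txt, "w") as f:
        for ann in anns:
            w = ann["Right"] - ann["Left"]
            h = ann["Bottom"] - ann["Top"]
            xc = (ann["Left"] + w / 2) / img_w
            yc = (ann["Top"] + h / 2) / img_h
            f.write(f"{ann['ObjectClassId']} {xc:.6f} {yc:.6f} {w / img_w:.6f} {h / img_h:.6f}\n")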

Code is running but the results are not saved

Thanks for your code. I'm trying to convert the keypoints.json file to YOLO format using your code in Colab. The code runs fine and creates a new folder with images and labels, but the YOLO-format labels are not saved. Can you please let me know what the issue is? Thanks

Having trouble converting labelbox to darknet

I have a dataset I labeled and exported from labelbox. I found this repo trying to find an easy way to convert the data to Darknet format, and I'm having some issues.

For starters, I specified that I was using labelbox and the filepath to my data (here).

if __name__ == '__main__':
    source = 'labelbox'

    if source == 'labelbox':  # Labelbox https://labelbox.com/
        convert_labelbox_json(name='darknet',
                              file='../myfile.json')

I ran the script with python run.py and have been getting errors caused by the format of the file itself. My data is actually a list (that is, all the json is enclosed within "[]"), so I got
TypeError: list indices must be integers, not str
Understandable. I enclosed the entire loop for writing images within a loop that went through the whole list of json dicts.

with open(file) as f:
    data = json.load(f)

# Write images and shapes
name = 'out' + os.sep + name
file_id, file_name, width, height = [], [], [], []
for j in data:
    for i, x in enumerate(tqdm(j['images'], desc='Files and Shapes')):
        file_id.append(x['id'])
        file_name.append('IMG_' + x['file_name'].split('IMG_')[-1])
        width.append(x['width'])
        height.append(x['height'])

        # filename
        with open(name + '.txt', 'a') as file:
            file.write('%s\n' % file_name[i])

        # shapes
        with open(name + '.shapes', 'a') as file:
            file.write('%g, %g\n' % (x['width'], x['height']))


Then there was a whole host of new issues because the naming within the json isn't consistent with the script. I don't know if labelbox changed the format of their data exports or if I made some glaring mistake somewhere, but it feels like I would have to essentially rewrite the entire script to get this to work correctly.

Converting FSOCO-dataset .json format into a YOLOv8-accepted format

Hi All!

I would like to convert the .json annotated file below into a YOLOv8-accepted format; can somebody help me out?

{ "description": "", "tags": [ { "id": 118615272, "tagId": 30143178, "name": "train", "value": null, "labelerLogin": "fsocov2", "createdAt": "2020-08-11T08:43:40.624Z", "updatedAt": "2020-08-11T08:43:40.624Z" } ], "size": { "height": 920, "width": 2872 }, "objects": [ { "id": 889978312, "classId": 9993511, "description": "", "geometryType": "rectangle", "labelerLogin": "fsocov2", "createdAt": "2020-08-11T08:17:13.366Z", "updatedAt": "2020-08-11T08:17:13.366Z", "tags": [], "classTitle": "blue_cone", "points": { "exterior": [ [ 2377, 198 ], [ 2398, 224 ] ], "interior": [] } }, { "id": 889978311, "classId": 9993511, "description": "", "geometryType": "rectangle", "labelerLogin": "fsocov2", "createdAt": "2020-08-11T08:17:13.366Z", "updatedAt": "2020-08-11T08:17:13.366Z", "tags": [], "classTitle": "blue_cone", "points": { "exterior": [ [ 2419, 189 ], [ 2436, 210 ] ], "interior": [] } }, { "id": 889978310, "classId": 9993511, "description": "", "geometryType": "rectangle", "labelerLogin": "fsocov2", "createdAt": "2020-08-11T08:17:13.366Z", "updatedAt": "2020-08-11T08:17:13.366Z", "tags": [], "classTitle": "blue_cone", "points": { "exterior": [ [ 2316, 212 ], [ 2345, 246 ] ], "interior": [] } },

And I need in <class_id> <x_center> <y_center> <width> <height>
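This looks like the Supervisely export format, so one hedged approach, assuming the rectangle's exterior points are two opposite corners as in the sample, is:

import json

def supervisely_to_yolo(json_file, out_txt, class_names):
    # class_names (e.g. ['blue_cone', 'yellow_cone']) defines the class indices
    with open(json_file) as f:
        data = json.load(f)
    w, h = data["size"]["width"], data["size"]["height"]
    with open(out_txt, "w") as f:
        for obj in data["objects"]:
            if obj["geometryType"] != "rectangle":
                continue
            (xa, ya), (xb, yb) = obj["points"]["exterior"]  # two opposite corners
            x1, x2 = min(xa, xb), max(xa, xb)
            y1, y2 = min(ya, yb), max(ya, yb)
            cls = class_names.index(obj["classTitle"])
            f.write(f"{cls} {(x1 + x2) / 2 / w:.6f} {(y1 + y2) / 2 / h:.6f} "
                    f"{(x2 - x1) / w:.6f} {(y2 - y1) / h:.6f}\n")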

Issue with saving txt file

I get the following error when running the script after changing the path to the coco.json file to the path of my folder: FileNotFoundError: [Errno 2] No such file or directory: 'new_dir/labels/coco/data/video-GzdKTLbkG5F7gAunM-frame-000108-QHZmA4QTZCnzBG3HZ.txt'.

No Output

The code just ran with no output.

Different json format to YOLOv5 format

Can anyone help me write code to convert a JSON file in this format ({"geometry":{"type":"RECTANGLE","x":33.99378999750665,"y":20.792079207920793,"width":51.41843971631205,"height":55.44554455445545},"data":{"text":"NUT","id":1}}) to a YOLOv5 txt file?
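The values in that geometry look like percentages of the image size, with x/y as the top-left corner. If that assumption holds (it is an assumption, not confirmed by the export tool), the conversion is pure arithmetic, since YOLO coordinates are normalized anyway:

def percent_rect_to_yolo(geometry, class_id):
    # Assumes x/y are the top-left corner and all four values are percentages
    x, y = geometry["x"] / 100, geometry["y"] / 100
    w, h = geometry["width"] / 100, geometry["height"] / 100
    return f"{class_id} {x + w / 2:.6f} {y + h / 2:.6f} {w:.6f} {h:.6f}"

line = percent_rect_to_yolo(
    {"x": 33.99379, "y": 20.79208, "width": 51.41844, "height": 55.44554}, 1)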

unsupported operand type(s) for +: 'WindowsPath' and 'str'

Traceback (most recent call last):
  File "E:\JSON2YOLO-master\general_json2yolo.py", line 397, in <module>
    convert_vott_json(name='data',
  File "E:\JSON2YOLO-master\general_json2yolo.py", line 73, in convert_vott_json
    name = path + os.sep + name
TypeError: unsupported operand type(s) for +: 'WindowsPath' and 'str'
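The error comes from concatenating a pathlib.Path with a str using +, which pathlib does not support. Two equivalent fixes, sketched with an example WindowsPath:

import os
from pathlib import Path

path = Path(r"E:\JSON2YOLO-master")  # example WindowsPath
name = "data"
out = str(path) + os.sep + name  # fix 1: cast the Path to str before '+'
out = path / name                # fix 2: use pathlib's '/' join operator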

alternative method- COCO json to YOLO

For anyone else with the same problem:

I needed to consolidate multiple COCO json files (and corresponding images), sort out the class IDs, write YOLO-format annotation files, and separate them into train/val/(test) datasets.

I struggled with Python (mainly due to my limited skills) and also with applications such as fiftyone, so I ended up using Pentaho and created a fairly simple transformation to convert multiple COCO datasets into one YOLO dataset.

I'm sure the same could be done more easily in Python.

Happy to share it; reply if you are interested.

Andrew


test2

How is the YOLO segmentation format different from the detection format?

Explanation of results

Hello, I trained a YOLOv8 model with 53 classes (all belonging to indoor environments) selected from the MS-COCO dataset. I trained the model for 100 epochs with default settings, and these are the results I received.

training_results

Can someone please explain the results to me?

This is what I understand:

  • The mAP50 metric is at 0.469. This means that the model is correctly identifying and localizing objects about 46.9% of the time when a 50% overlap with the ground-truth bounding boxes is considered a correct detection.
  • The mAP50-95, which is the average measure of model’s performance across IoU thresholds from 0.50 to 0.95 is at 0.331. This means the performance of the model drops when a stricter localization criterion is applied. This is a common issue because it is more challenging to have a high degree of overlap for correct detections, but it shows that the model has room for improvement in terms of precision of bounding box predictions.
  • In object detection, especially with a large number of classes (53 in this case), achieving high mAP values can be challenging. The mAP at IoU=0.5 is decent, suggesting that the model can detect objects with a fair amount of accuracy when a lower threshold for overlap is set.
  • The box loss is the bounding box regression loss which measures the error in predicted bounding box compared to the ground truth. Lower box loss means the predicted bounding boxes are more accurate. The training loss for box is 1.11 and validation is 1.125.
  • The classification loss (cls_loss) measures the error in the predicted class probabilities for each object in the image compared to the ground truth. Lower classification loss means the model is more accurately predicting the class of an object. The classification loss is 1.175 for training and 1.227 for validation.
  • The distribution focal loss (dfl_loss) measures the error in the predicted bounding-box coordinate distributions; DFL is designed to improve the precision of box boundary localization across object scales and aspect ratios. A lower dfl_loss indicates that the model predicts box boundaries more precisely. The dfl loss for training is 1.179 and validation is 1.166.
  • All three losses are decreasing over epochs, which is a good sign indicating that the model is learning.
  • There's a significant drop early in training (before epoch 5), followed by a plateau, which is common as the model starts to converge.
  • The patterns for the validation loss are similar to the training losses, but the validation losses are generally higher than the training losses.

Did I miss anything in my understanding of the results? Can I improve the results? If so, how?

TypeError: string indices must be integers

Trying to run the script labelbox_json2yolo.py, I get the following error at line 22 in convert:

im_path = img['Labeled Data']
TypeError: string indices must be integers

It creates a directory with the name of my JSON file and two subdirectories, images and labels. Then I get the error.
All I did was change the bottom line to the path of my JSON file.

How do I use this script?

I have a coco.json file but when I run this script I've got the following error:

Traceback (most recent call last):
  File "./run.py", line 88, in <module>
    main(name, file)
  File "./run.py", line 60, in main
    with open('out/labels/' + label_name, 'a') as file:
FileNotFoundError: [Errno 2] No such file or directory: 'out/labels/IMG_https://storage.googleapis.com/labelbox-193903.appspot.com/cjvzre25678v10804t8epwim4%2F501af721-032d-33d3-e4b5-cc584ff45244-769a3159-1ecf-4f3e-aa8f-e09f0bde6014_LA.txt'

A question about coco128-seg.yaml

The train/val/test paths in coco128-seg.yaml are as follows:

path: ./datasets/coco128-seg # dataset root dir
train: images/train2017 # train images (relative to 'path') 128 images
val: images/train2017 # val images (relative to 'path') 128 images
test: # test images (optional)

Why is the train path the same as the val path?

A tutorial or some examples?

Hi, thanks for your code!

Is it possible to add a simple explanation of how to use the code? That would make things much easier. For example, where should the json file be placed? What are the arguments? What are the two Python files for?
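Based on the calls visible in other issues in this thread (a hedged sketch, not official documentation), the COCO converter is pointed at a directory containing your *.json annotation files:

from general_json2yolo import convert_coco_json

convert_coco_json("../datasets/coco/annotations")  # directory with *.json files
# YOLO labels are then written under new_dir/labels/ as one *.txt per image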

coco2yolo error

Annotations E:\PyCode\JSON2YOLO-master\datasets\coco\annotations\coco.json: 0%| | 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:\PyCode\JSON2YOLO-master\123.py", line 543, in <module>
    convert_coco_json('datasets/coco/annotations',  # directory with *.json
  File "E:\PyCode\JSON2YOLO-master\123.py", line 325, in convert_coco_json
    with open((fn / f).with_suffix('.txt'), 'a') as file:
FileNotFoundError: [Errno 2] No such file or directory: 'new_dir\labels\coco\JPEGImages\0.txt'

I only have coco.json and some images, but no txt files. The JSON file was converted from labelme's JSON.
coco.json
Here is my json. Thank you~

merge_multi_segment

Hello! I'm trying to adapt the general_json2yolo script for my use case and I've run into a problem I haven't been able to solve so far. Basically, I don't really understand what the function merge_multi_segment does, or what form the segment information should take for it to work. This is how the function is called:

s = merge_multi_segment(ann['segmentation'])

and ann['segmentation'] for me is a list such as:

['199', '127', '198', '128', '196', '128', '195', '129', '191', '129', '190', '130',
 '185', '130', '184', '131', '177', '131', '175', '133', '169', '133', '166', '136',
 '165', '136', '162', '139', '162', '141', '163', '142', '180', '142', '181', '141',
 '196', '141', '197', '142', '197', '144', '196', '145', '196', '147', '195', '148',
 '195', '149', '194', '150', '194', '151', '193', '152', '193', '153', '204', '153',
 '205', '152', '206', '152', '208', '150', '209', '150', '212', '147', '212', '142',
 '211', '141', '211', '139', '208', '136', '208', '135', '207', '134', '208', '133',
 '217', '133', '218', '134', '219', '134', '220', '133', '220', '132', '221', '131',
 '221', '130', '220', '129', '220', '128', '219', '127']

Besides changing those values into integers, what should I take into account, and what are the "segments" that merge_multi_segment refers to?
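merge_multi_segment() joins the multiple polygons of a single multi-part COCO instance into one polygon by connecting them at nearby points, so it expects a list of segments, where each segment is a flat [x1, y1, x2, y2, ...] list of numbers. The data above is a single polygon stored as strings, so there is nothing to merge. A hedged sketch, assuming a hypothetical 640x480 image for normalization:

import numpy as np

seg_strings = ['199', '127', '198', '128']  # ... your full list from above
segmentation = [[float(v) for v in seg_strings]]  # one polygon -> list of one segment

w, h = 640, 480  # hypothetical image width and height
if len(segmentation) > 1:
    s = merge_multi_segment(segmentation)  # from general_json2yolo.py
    s = (np.concatenate(s, axis=0) / np.array([w, h])).reshape(-1).tolist()
else:
    s = (np.array(segmentation[0]).reshape(-1, 2) / np.array([w, h])).reshape(-1).tolist()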

No .yaml file?

Hi! I'm trying to train a YOLOv8 model through Ultralytics Hub.

My dataset is currently in a COCO format and I need it to be in the YOLO format to upload it to Ultralytics Hub.
I tried the general_json2yolo.py file for each of my train, valid, and test folders. It worked, but it didn't seem to automatically create a .yaml file for the YOLO format. Is there any way to do that?
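The script doesn't appear to write a dataset yaml for COCO conversions, but one can be generated in a few lines, following the same pattern the labelbox converter uses. A sketch with hypothetical paths and class names; adjust to your folder layout:

import yaml

names = ["person", "car"]  # hypothetical class list, in class-index order
d = {
    "path": "../datasets/my_dataset",  # dataset root dir
    "train": "images/train",
    "val": "images/valid",
    "test": "images/test",  # optional
    "nc": len(names),
    "names": names,
}
with open("my_dataset.yaml", "w") as f:
    yaml.dump(d, f, sort_keys=False)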

How to convert files annotated with linestrip in labelme to yolo

Normally, when we label with rectangular boxes, we obtain x1y1 and x2y2 and convert those coordinates to cx cy w h. If I instead use a linestrip and obtain xy coordinates, how should I convert them to the corresponding format? (label and json screenshots were attached)
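A linestrip has no inherent box, so one common approach is to take the axis-aligned bounding box of its points and then normalize as usual. A minimal sketch, assuming the points are pixel coordinates from the labelme JSON:

def linestrip_to_yolo_bbox(points, img_w, img_h, class_id=0):
    # points: [(x1, y1), (x2, y2), ...] from a labelme linestrip annotation
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x1, x2, y1, y2 = min(xs), max(xs), min(ys), max(ys)
    cx, cy = (x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h
    w, h = (x2 - x1) / img_w, (y2 - y1) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"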

Convert the COCO RLE format to YOLOv5/v8 segmentation format.

Hi, thanks for your useful script.

We added rle2polygon() to general_json2yolo.py so that you can convert the COCO RLE format to the YOLOv5/v8 segmentation format. Please let us know your opinion.
https://github.com/ryouchinsa/Rectlabel-support/blob/master/general_json2yolo.py

if use_segments:
    if len(ann['segmentation']) == 0:
        segments.append([])
        continue
    if isinstance(ann['segmentation'], dict):
        ann['segmentation'] = rle2polygon(ann['segmentation'])
    if len(ann['segmentation']) > 1:
        s = merge_multi_segment(ann['segmentation'])
        s = (np.concatenate(s, axis=0) / np.array([w, h])).reshape(-1).tolist()

import cv2
import numpy as np
from pycocotools import mask  # required by rle2polygon() below

def is_clockwise(contour):
    value = 0
    num = len(contour)
    for i, point in enumerate(contour):
        p1 = contour[i]
        if i < num - 1:
            p2 = contour[i + 1]
        else:
            p2 = contour[0]
        value += (p2[0][0] - p1[0][0]) * (p2[0][1] + p1[0][1])
    return value < 0

def get_merge_point_idx(contour1, contour2):
    idx1 = 0
    idx2 = 0
    distance_min = -1
    for i, p1 in enumerate(contour1):
        for j, p2 in enumerate(contour2):
            distance = pow(p2[0][0] - p1[0][0], 2) + pow(p2[0][1] - p1[0][1], 2)
            if distance_min < 0:
                distance_min = distance
                idx1 = i
                idx2 = j
            elif distance < distance_min:
                distance_min = distance
                idx1 = i
                idx2 = j
    return idx1, idx2

def merge_contours(contour1, contour2, idx1, idx2):
    contour = []
    for i in list(range(0, idx1 + 1)):
        contour.append(contour1[i])
    for i in list(range(idx2, len(contour2))):
        contour.append(contour2[i])
    for i in list(range(0, idx2 + 1)):
        contour.append(contour2[i])
    for i in list(range(idx1, len(contour1))):
        contour.append(contour1[i])
    contour = np.array(contour)
    return contour

def merge_with_parent(contour_parent, contour):
    if not is_clockwise(contour_parent):
        contour_parent = contour_parent[::-1]
    if is_clockwise(contour):
        contour = contour[::-1]
    idx1, idx2 = get_merge_point_idx(contour_parent, contour)
    return merge_contours(contour_parent, contour, idx1, idx2)

def mask2polygon(image):
    contours, hierarchies = cv2.findContours(image, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_TC89_KCOS)
    contours_approx = []
    polygons = []
    for contour in contours:
        epsilon = 0.001 * cv2.arcLength(contour, True)
        contour_approx = cv2.approxPolyDP(contour, epsilon, True)
        contours_approx.append(contour_approx)

    contours_parent = []
    for i, contour in enumerate(contours_approx):
        parent_idx = hierarchies[0][i][3]
        if parent_idx < 0 and len(contour) >= 3:
            contours_parent.append(contour)
        else:
            contours_parent.append([])

    for i, contour in enumerate(contours_approx):
        parent_idx = hierarchies[0][i][3]
        if parent_idx >= 0 and len(contour) >= 3:
            contour_parent = contours_parent[parent_idx]
            if len(contour_parent) == 0:
                continue
            contours_parent[parent_idx] = merge_with_parent(contour_parent, contour)

    contours_parent_tmp = []
    for contour in contours_parent:
        if len(contour) == 0:
            continue
        contours_parent_tmp.append(contour)

    polygons = []
    for contour in contours_parent_tmp:
        polygon = contour.flatten().tolist()
        polygons.append(polygon)
    return polygons 

def rle2polygon(segmentation):
    if isinstance(segmentation["counts"], list):
        segmentation = mask.frPyObjects(segmentation, *segmentation["size"])
    m = mask.decode(segmentation) 
    m[m > 0] = 255
    polygons = mask2polygon(m)
    return polygons

COCO format

How do I convert the YOLO format to the COCO format?
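The reverse direction isn't covered by this repo, but it is mostly arithmetic: denormalize cx/cy/w/h and emit COCO's [x, y, width, height] boxes. A hedged sketch (it assumes .jpg images named like the label files; COCO category ids are often 1-based, so adjust if needed):

import json
from pathlib import Path
from PIL import Image

def yolo_to_coco(labels_dir, images_dir, class_names, out_json):
    images, annotations = [], []
    ann_id = 0
    for img_id, txt in enumerate(sorted(Path(labels_dir).glob("*.txt"))):
        img_path = Path(images_dir) / (txt.stem + ".jpg")  # adjust extension as needed
        w, h = Image.open(img_path).size
        images.append({"id": img_id, "file_name": img_path.name, "width": w, "height": h})
        for line in txt.read_text().splitlines():
            c, xc, yc, bw, bh = line.split()
            bw, bh = float(bw) * w, float(bh) * h
            x, y = float(xc) * w - bw / 2, float(yc) * h - bh / 2  # top-left corner
            annotations.append({"id": ann_id, "image_id": img_id, "category_id": int(c),
                                "bbox": [x, y, bw, bh], "area": bw * bh, "iscrowd": 0})
            ann_id += 1
    categories = [{"id": i, "name": n} for i, n in enumerate(class_names)]
    with open(out_json, "w") as f:
        json.dump({"images": images, "annotations": annotations, "categories": categories}, f)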

Multi-class labeling

This is my labeled image.


 
This is xml annotation file from CVAT.

annotations.txt (download and change txt to xml)

 
This is converted txt file using cvat_to_cocoKeypoints.py

image_001.txt

 
Checking the converted txt file, the class numbers and bounding box information for cars and people are all the same. Does it support multi-class labeling?

custom dataset

Does it work with a custom COCO JSON dataset? When I run it, it returns empty directories of images and labels. I am wondering whether it works only for the 80 (or 91) classes of the standard COCO dataset.

I am going to keep working with the other popular convert2Yolo repository for the moment, although that one also has some formatting deficiencies.

Thanks!

Solutions to 'make_dirs' problem in `labelbox_json2yolo.py` code

I'd like to convert my label file in *.json to YOLO *.txt with two classes ('bsb', 'wsb') using labelbox_json2yolo.py.

In my data (image and label file) I have:

file = "https://raw.githubusercontent.com/Leprechault/trash/main/YT_EMBRAPA_002.zip" # Path to zip file on GitHub

import json
import os
from pathlib import Path

import requests
import yaml
from PIL import Image
from tqdm import tqdm

from utils import make_dirs


def convert(file, zip=True):
    # Convert Labelbox JSON labels to YOLO labels
    names = ['bsb','wsb']  # class names
    file = Path(file)
    save_dir = make_dirs(file.stem)
    with open(file) as f:
        data = json.load(f)  # load JSON

    for img in tqdm(data, desc=f'Converting {file}'):
        im_path = img['Labeled Data']
        im = Image.open(requests.get(im_path, stream=True).raw if im_path.startswith('http') else im_path)  # open
        width, height = im.size  # image size
        label_path = save_dir / 'labels' / Path(img['External ID']).with_suffix('.txt').name
        image_path = save_dir / 'images' / img['External ID']
        im.save(image_path, quality=95, subsampling=0)

        for label in img['Label']['objects']:
            # box
            top, left, h, w = label['bbox'].values()  # top, left, height, width
            xywh = [(left + w / 2) / width, (top + h / 2) / height, w / width, h / height]  # xywh normalized

            # class
            cls = label['value']  # class name
            if cls not in names:
                names.append(cls)

            line = names.index(cls), *xywh  # YOLO format (class_index, xywh)
            with open(label_path, 'a') as f:
                f.write(('%g ' * len(line)).rstrip() % line + '\n')

    # Save dataset.yaml
    d = {'path': f"../datasets/{file.stem}  # dataset root dir",
         'train': "images/train  # train images (relative to path) 128 images",
         'val': "images/val  # val images (relative to path) 128 images",
         'test': " # test images (optional)",
         'nc': len(names),
         'names': names}  # dictionary

    with open(save_dir / file.with_suffix('.yaml').name, 'w') as f:
        yaml.dump(d, f, sort_keys=False)

    # Zip
    if zip:
        print(f'Zipping as {save_dir}.zip...')
        os.system(f'zip -qr {save_dir}.zip {save_dir}')

    print('Conversion completed successfully!')


if __name__ == '__main__':
    convert('export-2021-06-29T15_25_41.934Z.json')

I have tried to run the script but I get the following error:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[6], line 10
      7 from PIL import Image
      8 from tqdm import tqdm
---> 10 from utils import make_dirs

ImportError: cannot import name 'make_dirs' from 'utils' (C:\Users\fores\anaconda3\lib\site-packages\utils\__init__.py)

Is it possible to point it at a local directory on my machine instead of calling make_dirs(file.stem)? Any help with this, please?
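If importing from utils fails, you can define the helper locally. Below is a sketch of roughly what the repo's make_dirs helper does (recreate the output directory with images/ and labels/ subfolders); treat the exact behavior as an assumption:

import shutil
from pathlib import Path

def make_dirs(dir="new_dir"):
    # Recreate an output directory with labels/ and images/ subfolders
    dir = Path(dir)
    if dir.exists():
        shutil.rmtree(dir)  # delete any existing output dir
    for p in (dir, dir / "labels", dir / "images"):
        p.mkdir(parents=True, exist_ok=True)
    return dir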

RuntimeWarning

I ran python run.py and got:

Annotations ../coco/annotations/train.json:  22%|▏| 98087/438409 [00:06<00:21]
run.py:338: RuntimeWarning: invalid value encountered in true_divide

xml to coco

convert YOLO (darknet) datasets into JSON format

Unable to read images from GCP in labelbox_json2yolo.py

The images I annotated with Labelbox are stored in a GCP bucket. When I attempt to run the script, it cannot find the images referenced in the Labelbox JSON because they are stored on GCS.

How can this script be used when images aren't stored locally but instead in GCP?

This is the error I see:
[Errno 22] Invalid argument: 'gs://[bucket_name]/[folder_name]/[image_name].jpg'
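One option is to download each gs:// object to a local temp file first, using the official client library. A hedged sketch; it requires pip install google-cloud-storage and GCP credentials configured in your environment:

from google.cloud import storage

def download_gs(uri, dest):
    # uri like 'gs://bucket_name/folder_name/image_name.jpg'
    bucket_name, blob_path = uri[len("gs://"):].split("/", 1)
    client = storage.Client()  # picks up credentials from the environment
    client.bucket(bucket_name).blob(blob_path).download_to_filename(dest)
    return dest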

Please help

Hi, I am new to Python. Can anyone help with this error?

Traceback (most recent call last):
  File "/home/pi/JSON2YOLO/run.py", line 340, in <module>
    file='/home/pi/JSON2YOLO/export-coco.json')
  File "/home/pi/JSON2YOLO/run.py", line 22, in convert_labelbox_json
    for i, x in enumerate(tqdm(data['images'], desc='Files and Shapes')):
TypeError: list indices must be integers or slices, not str

Maybe this is a bug in how the folder's file list is saved to a txt file.

def image_folder2file(folder="images/"):  # from utils import *; image_folder2file()
    # write a txt file listing all images in folder
    s = glob.glob(f"{folder}/*.*")

    with open(f"{folder}.txt", "w") as file:  # note: folder="images/" makes this "images/.txt"
        for l in s:
            file.write(l + "\n")  # write image list

Converting the COCO keypoints format to YOLOv8 pose format.

We improved the script so that it converts the COCO keypoints format to the YOLOv8 pose format.
https://github.com/ryouchinsa/Rectlabel-support/blob/master/general_json2yolo.py

if use_keypoints:
    k = (np.array(ann['keypoints']).reshape(-1, 3) / np.array([w, h, 1])).reshape(-1).tolist()
    k = box + k
    keypoints.append(k) 

The function show_kpt_shape_flip_idx() prints the kpt_shape and flip_idx for the yaml file.
Copy the two printed lines and paste them into your yaml file.
Please let us know your opinion.

def show_kpt_shape_flip_idx(data):
    for category in data['categories']:
        if 'keypoints' not in category:
            continue
        keypoints = category['keypoints']
        num = len(keypoints)
        print('kpt_shape: [' + str(num) + ', 3]')
        flip_idx = list(range(num))
        for i, name in enumerate(keypoints):
            name = name.lower()
            left_pos = name.find('left')
            if left_pos < 0:
                continue
            name_right = name.replace('left', 'right')
            for j, namej in enumerate(keypoints):
                namej = namej.lower()
                if namej == name_right:
                    flip_idx[i] = j
                    flip_idx[j] = i
                    break
        print('flip_idx: [' + ', '.join(str(x) for x in flip_idx) + ']')

Converting Labelbox segmentation json to coco

{"ID":"xxx","DataRow ID":"xxx","Labeled Data":"https://xxx","Label":{"objects":[{"featureId":"xxx","schemaId":"xxx","color":"#1CE6FF","title":"xxx","value":"xxx","instanceURI":"https:/xxx"}],"classifications":[],"relationships":[]},"Created By":"xxx","Project Name":"xxx","Created At":"xxx","Updated At":"xxx","Seconds to Label":xxx,"Seconds to Review":0,"Seconds to Create":xxx,"External ID":"xxx","Global Key":null,"Agreement":-1,"Is Benchmark":0,"Benchmark Agreement":-1,"Benchmark ID":null,"Dataset Name":"xxx","Reviews":[],"View Label":"xxx","Has Open Issues":0,"Skipped":false,"DataRow Workflow Info":{"taskName":"Done","Workflow History":[{"actorId":"xxx","action":"MOVE","createdAt":"xxx"},{"actorId":"xxx","action":"MOVE","createdAt":"xxx","previousTaskId":"xxx","previousTaskName":"Initial labeling task","nextTaskId":"xxx","nextTaskName":"xxx"},{"actorId":"xxx","action":"MOVE","createdAt":"xxx","nextTaskId":"xxx","nextTaskName":"Initial labeling task"}]}}, {"ID":"xxx","DataRow ID":"xxx","Labeled Data": ...

Above is how Labelbox exports segmentation labels; in Labelbox terms they call it a Mask. Is there any example code showing how this can be converted to work with YOLO?
If not, would anyone be so kind as to help out?
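One hedged approach: each object's instanceURI points at a binary mask image, which can be downloaded and traced into polygons, e.g. by reusing mask2polygon() from the RLE thread above. Labelbox may require an API key in the request headers, which this sketch omits:

import cv2
import numpy as np
import requests

def instance_mask_to_polygons(instance_uri):
    # Download the Labelbox mask PNG and trace its contours into polygons
    img_bytes = requests.get(instance_uri).content  # add auth headers if needed
    m = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_GRAYSCALE)
    m = (m > 0).astype(np.uint8) * 255  # binarize
    return mask2polygon(m)  # mask2polygon() as defined in the RLE issue above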

Segmentation Data Normalization for TrashCan Dataset

Greetings,
I have been trying to use YOLOv8 recently using TrashCan dataset
However, I still don't know how to normalize segmentation label data.

I look everywhere but I don't get any references about it.
Any ideas?

Thank you.
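For YOLO segmentation labels, normalization just means dividing every x coordinate by the image width and every y coordinate by the image height, then writing the class index followed by the normalized pairs. A minimal sketch with example values:

def normalize_segment(points, img_w, img_h):
    # points: flat [x1, y1, x2, y2, ...] polygon in pixels
    return [v / (img_w if i % 2 == 0 else img_h) for i, v in enumerate(points)]

seg = normalize_segment([199, 127, 198, 128, 196, 128], 640, 480)
line = "0 " + " ".join(f"{v:.6f}" for v in seg)  # class 0 + normalized coords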
