Comments (13)
Hi @balag59, if your custom dataset has fewer than 1000 images and labels there is no problem, but if you have more than 1000 images you will need to pad your dataset with new or repeated data until its size is a multiple of 1000.
At least this workaround works for me. Hope it helps.
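If you want to try that workaround, here is a minimal sketch (my own illustration, not code from the repo) that pads a list of image paths by repeating entries until its length is a multiple of 1000:

```python
import math

def pad_to_multiple(items, multiple=1000):
    """Repeat entries of `items` until its length is a multiple of `multiple`."""
    if not items or len(items) % multiple == 0:
        return list(items)
    target = math.ceil(len(items) / multiple) * multiple
    padded = list(items)
    i = 0
    while len(padded) < target:
        padded.append(items[i % len(items)])  # cycle through the originals
        i += 1
    return padded

paths = [f"img_{i}.jpg" for i in range(1307)]
print(len(pad_to_multiple(paths)))  # 2000
```

Note that each repeated image still needs its matching label file to be found by the dataloader, so repeat image/label pairs together.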
@jinfagang unless you can reproduce this on COCO, your labels are the problem.
You should clone the repo again; something is wrong with your local version.
Also, the Colab notebook shows easy examples of training.
@glenn-jocher Hi, I found it was caused by having different numbers of images and labels in the directory.
What should I do if I have multiple batches of data? I placed all the images in a single folder and each batch's labels in a different directory, and every time I use a different batch I soft-link that batch's labels to the labels folder.
In this case it cannot load properly, since the lengths are not the same. How can I make this work without making many copies of the images (which wastes disk space and is hard to organize)?
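For reference, the label-swapping setup described above could be sketched like this (my own illustration with hypothetical paths such as `labels_batch1/`; not from the repo):

```python
import os
from pathlib import Path

def activate_label_batch(batch_dir: str, link_name: str = "labels") -> None:
    """Point the `labels` symlink at the given batch's label directory."""
    link = Path(link_name)
    if link.is_symlink() or link.exists():
        link.unlink()  # remove the previous batch's link
    os.symlink(Path(batch_dir).resolve(), link, target_is_directory=True)

# activate_label_batch("labels_batch1")  # training now sees batch 1's labels
```

The catch, as noted above, is that the images folder then contains entries with no matching label file in the active batch, which is exactly the image/label count mismatch being discussed.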
@jinfagang yes I understand. The repo caches labels into *.npy files for faster loading. Perhaps a rework of this is in order for greater flexibility.
A quick fix is to delete the *.npy files and use text labels instead, or to update the dataloader to ignore *.npy files:
Lines 332 to 341 in c14368d
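The first option, deleting the cached files so the dataloader falls back to the text labels, can be done with a short sketch (my own helper, not from the repo; adjust the directory to wherever your dataset lives):

```python
from pathlib import Path

def clear_label_cache(dataset_dir: str) -> int:
    """Delete cached *.npy label files under `dataset_dir`; return how many were removed."""
    removed = 0
    for npy in Path(dataset_dir).rglob("*.npy"):
        npy.unlink()
        removed += 1
    return removed

# clear_label_cache("data/custom")  # then rerun training to rebuild from *.txt labels
```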
@glenn-jocher Thank you for your work! I seem to be facing the same issue currently. Has any fix been attempted to resolve this? I've commented out the part in dataset.py where l is created from the npy file if labels_loaded=True, so that l is created directly from the txt files. However, the visualized training batches 0 and 1 still seem to suffer from this issue.
Will this affect only the visualizations, or will the predictions be affected too?
Is there anything else that I can do to ensure this doesn't happen?
@balag59 ha, the visualizations are not there to look pretty: they show you the actual data used for training. If they are not correct, your training won't be correct either. If you delete the npy files and are still seeing problems, then your actual labels are likely the issue. I'm working on a major upgrade of label caching and loading; it should be done this week or next.
In the meantime, start from the tutorial, which works well, and make sure your custom dataset follows exactly the same format and directory structure as coco128. You can also inspect coco128 on Kaggle:
https://www.kaggle.com/ultralytics/coco128
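For reference, coco128 pairs an `images` tree with a parallel `labels` tree, one `.txt` file per image with the same stem:

```
coco128/
├── images/
│   └── train2017/
│       ├── 000000000009.jpg
│       └── ...
└── labels/
    └── train2017/
        ├── 000000000009.txt   # one "class x_center y_center w h" row per object, normalized 0-1
        └── ...
```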
@glenn-jocher Thank you for the prompt response! I have used your yolov3 implementation extensively on custom datasets before and it worked great, and the preparation here is very similar except for the yml part. This dataset is a collection of 4 datasets (one of them being coco), and there seems to be a weird issue where the image shown and its displayed name don't match. I've added a prefix to each of the other 3 datasets and none to coco, so I know where each image is from. Could there be a mismatch between the images and labels? The numbers of images and labels in the train folder are not the same. I'm using only a single class (people). Here is what the visualizations look like:
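One way to check for that kind of mismatch is to list image stems with no label file and label files with no image (a sketch of my own, assuming the usual paired `images/` and `labels/` folders and `.jpg` images):

```python
from pathlib import Path

def find_mismatches(images_dir: str, labels_dir: str):
    """Return (image stems with no label file, label stems with no image)."""
    img_stems = {p.stem for p in Path(images_dir).glob("*.jpg")}
    lbl_stems = {p.stem for p in Path(labels_dir).glob("*.txt")}
    return sorted(img_stems - lbl_stems), sorted(lbl_stems - img_stems)

# missing_labels, orphan_labels = find_mismatches("train/images", "train/labels")
```

Images without a label file are treated as background-only, but orphan label files usually point to a naming problem worth fixing before training.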
Yes, it may be related to a different number of images and labels. The yolov5 repo has updates that removed the need to create a list of images used for training: you can now simply supply an /images folder in your data.yaml file, and it will look for labels in a corresponding /labels folder. This is how coco128 works.
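A minimal data.yaml along those lines might look like this (paths and class name are placeholders for your own dataset):

```yaml
train: ../datasets/custom/images/train   # labels found in ../datasets/custom/labels/train
val: ../datasets/custom/images/val

nc: 1              # number of classes
names: ['person']  # class names
```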
I'm aware, and that is what I'm doing right now. I've just given the path to the images folder in the yml file, and this is still happening. Would creating a list make any difference, or is there something else I need to try?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.