
Comments (316)

synked16 avatar synked16 commented on April 28, 2024 4

@glenn-jocher
So can I fit a model with it?

from yolov5.

pfeatherstone avatar pfeatherstone commented on April 28, 2024 3

@glenn-jocher calling model = torch.hub.load('ultralytics/yolov5', 'yolov5l', pretrained=True) throws error:

Using cache found in /home/pf/.cache/torch/hub/ultralytics_yolov5_master
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 15, in <module>
    from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat, NMS, autoShape
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 8, in <module>
    from utils.datasets import letterbox
ModuleNotFoundError: No module named 'utils.datasets'; 'utils' is not a package

Process finished with exit code 1

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 3

@bipinkc19 you can send the model to a cuda device using normal pytorch methods: model.to(device).

YOLOv5 PyTorch Hub models automatically move images to the correct device if needed before inference.
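For reference, a minimal sketch of that (the image URL is just an example):

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.to(device)

# Images passed at inference time are moved to the model's device automatically.
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()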

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 2

@rlalpha I've updated pytorch hub functionality now in c4cb785 to automatically append an NMS module to the model when pretrained=True is requested. Anyone using YOLOv5 pretrained pytorch hub models must remove this last layer prior to training now:
model.model = model.model[:-1]

Anyone using YOLOv5 pretrained pytorch hub models directly for inference can now replicate the following code to use YOLOv5 without cloning the ultralytics/yolov5 repository. In this example you see the pytorch hub model detect 2 people (class 0) and 1 tie (class 27) in zidane.jpg. Note there is no repo cloned in the workspace. Also note that ideally all inputs to the model should be letterboxed to the nearest 32 multiple. The second best option is to stretch the image up to the next largest 32-multiple as I've done here with PIL resize.
Screen Shot 2020-09-18 at 6 30 08 PM
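The script in the screenshot above isn't preserved in this mirror. Below is a rough sketch of the resize-to-32-multiple idea, assuming a Hub model that accepts PIL images directly (as later versions do) and that zidane.jpg is available locally:

import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

img = Image.open('zidane.jpg')  # e.g. 1280x720

# Second-best option from the comment above: stretch each side up to the next multiple of 32.
w, h = img.size
img = img.resize((32 * ((w + 31) // 32), 32 * ((h + 31) // 32)))

results = model(img)
results.print()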

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 2

@dan0nchik that's a good question. There's no capability for this currently. It would be nice to have something like results.save('path/to/dir') right?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 2

@dimzog @pravastacaraka PyTorch Hub provides a pathway for defining models, nothing more. What you do with that model is up to you, though you are responsible for building whatever functionality you need yourself.

As I said, for a fully managed training solution I would recommend train.py.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@rlalpha @justAyaan @MohamedAliRashad this PyTorch Hub tutorial is now updated to reflect the simplified inference improvements in PR #1153. It's very simple now to load any YOLOv5 model from PyTorch Hub and use it directly for inference on PIL, OpenCV, Numpy or PyTorch inputs, including for batched inference. Reshaping and NMS are handled automatically. Example script is shown in above tutorial.
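For instance, a minimal sketch along those lines (the image paths are placeholders):

import cv2
import torch
from PIL import Image

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

imgs = [
    Image.open('zidane.jpg'),           # PIL image
    cv2.imread('bus.jpg')[:, :, ::-1],  # OpenCV image, BGR to RGB
]

results = model(imgs, size=640)  # batched inference; resizing and NMS are handled automatically
results.print()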

from yolov5.

EconML avatar EconML commented on April 28, 2024 1

@glenn-jocher Thank you for your prompt reply, and your tireless efforts!

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@jmanuelnavarro hub does not need network connectivity. Load the model with your first method as that's working.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@dan0nchik I've implemented your feature idea in PR #2179. You can now pass a directory to save results to:

results.save()  # save to current directory
results.save('path/to/dir')  # save to specific directory

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@kinoute yes it should, this is on a long TODO list of ours. If you'd like to contribute this feature feel free to submit a PR! A good example starting point is here:
https://github.com/WelkinU/yolov5-fastapi-demo/blob/270a9f6114ce1e54bd047221544178a419eef365/server.py#L71-L89
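A rough sketch of the missing piece (this is not the linked code; the names, colors, and pred format here follow the common.py snippet quoted later in the thread and are assumptions):

import numpy as np
from PIL import Image, ImageDraw

def draw_labeled_boxes(img, pred, names, colors):
    # img: HWC numpy array or PIL image; pred: rows of [xmin, ymin, xmax, ymax, conf, cls]
    img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img
    draw = ImageDraw.Draw(img)
    for *box, conf, cls in pred:
        label = f'{names[int(cls)]} {conf:.2f}'
        draw.rectangle(box, width=4, outline=colors[int(cls) % 10])
        draw.text((box[0], box[1]), label, fill=colors[int(cls) % 10])  # class name + confidence
    return img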

from yolov5.

morestart avatar morestart commented on April 28, 2024 1

@glenn-jocher My project uses multiple videos to detect objects. detect.py is an easy way to use, but it is too dependent on local files to load the model, and it's hard to scale. I think using hub to load the model is a simpler plan, and it makes my project look better. I will try to change the LoadStreams class so it can use the Hub model for inference.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@debparth see PyTorch Hub Tutorial, it's explained there.

Tutorials

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@xcalizorz hi good question! I've updated the tutorial above with details on how to filter inference results by class:

Inference Settings

Inference settings such as confidence threshold, NMS IoU threshold, and classes filter are model attributes, and can be modified by:

model.conf = 0.25  # confidence threshold (0-1)
model.iou = 0.45  # NMS IoU threshold (0-1)
model.classes = None  # (optional list) filter by class, i.e. = [0, 15, 16] for persons, cats and dogs

results = model(imgs, size=320)  # custom inference size

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@Lauler see Train Custom Data tutorial to get started with training:

YOLOv5 Tutorials

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@pravastacaraka PyTorch Hub can support any inference as long as you build a dataloader for it.

For a fully managed inference solution see detect.py.

from yolov5.

pravastacaraka avatar pravastacaraka commented on April 28, 2024 1

@glenn-jocher That answer didn't help me. I will clarify my question.

Can I train my dataset using the PyTorch hub instead of using train.py? Because based on the information you provided above, there is a Training section:

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

I thought I could use a PyTorch hub to train my dataset. If so, how do I pass these model variables to my dataset? Is it like this?

results = model('path/to/my-dataset')

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024 1

@Dylan-H-Wang yes, the current intended behavior for torch inputs is simply for the AutoShape() wrapper to act as a pass-through. No preprocessing, postprocessing or NMS is done, and no results object is generated. This is the default use case in train.py, test.py, detect.py, and yolo.py.

yolov5/models/common.py

Lines 253 to 255 in ffb47ff

if isinstance(imgs, torch.Tensor):  # torch
    with amp.autocast(enabled=p.device.type != 'cpu'):
        return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference

from yolov5.

dllu avatar dllu commented on April 28, 2024 1

Hi @glenn-jocher, upon further debugging it seems to be a bug with Pytorch. Very strange --- I'll dig a bit further. pytorch/pytorch#58959

from yolov5.

MohamedAliRashad avatar MohamedAliRashad commented on April 28, 2024

Can someone use the training script with this configuration?

from yolov5.

rlalpha avatar rlalpha commented on April 28, 2024

Can I ask about the meaning of the output?
How can I reconstruct box prediction results from the output?
Thanks

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@rlalpha if you want to run inference, put the model in .eval() mode and select the first output. These are the predictions, which may then be filtered via NMS:
Screen Shot 2020-09-18 at 5 06 52 PM
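The code in the screenshot above isn't preserved in this mirror. Here is a rough sketch of the idea, assuming a yolov5 clone on the path so that non_max_suppression is importable (its module has moved between versions):

import torch
from utils.general import non_max_suppression  # location may differ across yolov5 versions

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, autoshape=False)
model.eval()

img = torch.zeros(1, 3, 640, 640)  # dummy BCHW input; real images should be normalized to 0-1
with torch.no_grad():
    pred = model(img)[0]  # first output holds the raw predictions
pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)  # one (n, 6) tensor per image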

from yolov5.

rlalpha avatar rlalpha commented on April 28, 2024

@rlalpha I've updated pytorch hub functionality now in c4cb785 to automatically append an NMS module to the model when pretrained=True is requested. Anyone using YOLOv5 pretrained pytorch hub models must remove this last layer prior to training now:
model.model = model.model[:-1]

Anyone using YOLOv5 pretrained pytorch hub models directly for inference can now replicate the following code to use YOLOv5 without cloning the ultralytics/yolov5 repository. In this example you see the pytorch hub model detect 2 people (class 0) and 1 tie (class 27) in zidane.jpg. Note there is no repo cloned in the workspace. Also note that ideally all inputs to the model should be letterboxed to the nearest 32 multiple. The second best option is to stretch the image up to the next largest 32-multiple as I've done here with PIL resize.
Screen Shot 2020-09-18 at 6 30 08 PM

I got how to do it now. Thank you for the rapid reply.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@pfeatherstone thanks for the feedback! Can you try with force_reload=True? Without it the cached repo is used, which may be out of date.

import torch
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True)

from yolov5.

pfeatherstone avatar pfeatherstone commented on April 28, 2024

Still doesn't work. I get the following errors:

Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/pf/.cache/torch/hub/master.zip
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/yolo.py", line 15, in <module>
    from models.common import Conv, Bottleneck, SPP, DWConv, Focus, BottleneckCSP, Concat, NMS, autoShape
  File "/home/pf/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 8, in <module>
    from utils.datasets import letterbox
ModuleNotFoundError: No module named 'utils.datasets'; 'utils' is not a package
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/pycharm-2020.2/plugins/python/helpers/pydev/pydevd.py", line 1785, in stoptrace
    debugger.exiting()
  File "/usr/local/pycharm-2020.2/plugins/python/helpers/pydev/pydevd.py", line 1471, in exiting
    sys.stdout.flush()
ValueError: I/O operation on closed file.

Process finished with exit code 1

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@pfeatherstone I've raised a new bug report in #1181 for your observation. This typically indicates that a pip package called utils is installed in your environment; you should pip uninstall utils.

from yolov5.

Semihal avatar Semihal commented on April 28, 2024

Hi!

I tried to load the model and apply .to(device), but I receive the exception: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@Semihal please raise a bug report with reproducible example code. Thank you.

from yolov5.

dagap avatar dagap commented on April 28, 2024

Is there a way to specify the NMS parameters on the pytorch hub model?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

NMS parameters are model.autoshape() attributes. You can modify them to whatever you want, e.g. model.conf = 0.5 before running inference.

yolov5/models/common.py

Lines 121 to 127 in 784feae

class autoShape(nn.Module):
    # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    img_size = 640  # inference size (pixels)
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    classes = None  # (optional list) filter by class

from yolov5.

p9anand avatar p9anand commented on April 28, 2024

Can we pass the augment argument at inference time?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@p9anand see the autoshape forward method for available arguments:

yolov5/models/common.py

Lines 131 to 138 in 94a7f55

def forward(self, imgs, size=640, augment=False, profile=False):
    # supports inference from various sources. For height=720, width=1280, RGB images example inputs are:
    #   opencv:   imgs = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(720,1280,3)
    #   PIL:      imgs = Image.open('image.jpg')  # HWC x(720,1280,3)
    #   numpy:    imgs = np.zeros((720,1280,3))  # HWC
    #   torch:    imgs = torch.zeros(16,3,720,1280)  # BCHW
    #   multiple: imgs = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

Custom model loading has been simplified now with PyTorch Hub in PR #1677 🚀

Custom Models

This example loads a custom 20-class VOC-trained YOLOv5s model 'yolov5s_voc_best.pt' with PyTorch Hub.

model = torch.hub.load('ultralytics/yolov5', 'custom', path_or_model='yolov5s_voc_best.pt')
model = model.autoshape()  # for PIL/cv2/np inputs and NMS

from yolov5.

EconML avatar EconML commented on April 28, 2024

Where can I see the code for the results methods offered through pytorch hub? i.e results.print(), results.save(), etc

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@EconML results is a Detections() instance, defined in models/common.py:

yolov5/models/common.py

Lines 190 to 191 in c0ffcdf

class Detections:
    # detections class for YOLOv5 inference results

from yolov5.

Lifeng1129 avatar Lifeng1129 commented on April 28, 2024

I want to know how to show the result with OpenCV cv2.imshow.

Now these results work well, as follows
results.print() # print results to screen
results.show() # display results
results.save() # save as results1.jpg, results2.jpg... etc.

But I want to know how to
cv2.imshow("Results", ????)

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@Lifeng1129 I've heard this request before, so I've created and merged a new PR #1897 to add this capability. To receive this update you'll need to force_reload your pytorch hub cache:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, force_reload=True)

Then you can use the new results.render() method to return a list of np arrays representing the original images annotated with the predicted bounding boxes. Note that cv2 usage of the images will require an RGB to BGR conversion, i.e.:

results = model(imgs)
im_list = results.render()
cv2.imshow('Results', im_list[0][..., ::-1])  # show image 0 with RGB to BGR conversion

from yolov5.

jmanuelnavarro avatar jmanuelnavarro commented on April 28, 2024

@glenn-jocher, loading the model using torch.hub is great functionality. Nevertheless, I am trying to deploy a custom-trained model in an isolated environment and the behavior is strange...
I successfully generate the model using torch.hub, save it, and load it again in the same py script (test.py):
image

However, when I try to load the model from the previously saved file in a second py script (test2.py), it fails:
image

Both scripts are in the same location:
image

What's the best way to do this?
Thanks in advance

from yolov5.

dan0nchik avatar dan0nchik commented on April 28, 2024

Hello! How can I save results to a folder?

from yolov5.

dan0nchik avatar dan0nchik commented on April 28, 2024

Great! Thank you very much!

from yolov5.

dan0nchik avatar dan0nchik commented on April 28, 2024

Hello again
Can you please add a flag or something to the display function, so original picture names would be saved?
For example:

results.save(save_orig_names=True) # save as results_zidane.jpg, results_bus.jpg... etc.

I've tried to implement that, but I couldn't test :(

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@dan0nchik yes that's a good point. I've thought of implementing this by default for use cases that allow it, i.e. when a file, URL, or PIL object is passed directly to the model. For other cases this is not possible, such as when cv2 or torch images are passed in.

If you have some work started down this path perhaps you could submit a PR and I could review there?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@dan0nchik BTW, the main pytorch hub functionality is done with the autoShape() module that is in common.py, which generates Detections() class results, also in common.py:

class autoShape(nn.Module):

You would want to modify this line in particular, i.e. fname = .../self.fnames[i] if self.fnames else .../results{i}.jpg

f = Path(save_dir) / f'results{i}.jpg'

from yolov5.

dan0nchik avatar dan0nchik commented on April 28, 2024

If you have some work started down this path perhaps you could submit a PR and I could review there?

@glenn-jocher Yes, I've created PR #2194 and implemented that, but didn't test.

from yolov5.

kinoute avatar kinoute commented on April 28, 2024

Should the display/save/show functions also display the class names? Right now they only display the bounding boxes. I can see there was something for this that was started here:

yolov5/models/common.py

Lines 214 to 217 in c0ffcdf

img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img  # from np
for *box, conf, cls in pred:  # xyxy, confidence, class
    # str += '%s %.2f, ' % (names[int(cls)], conf)  # label
    ImageDraw.Draw(img).rectangle(box, width=4, outline=colors[int(cls) % 10])  # plot

from yolov5.

jalotra avatar jalotra commented on April 28, 2024

Hey @glenn-jocher Thanks for putting this awesome work together.
How to do this :

  1. I want to detect just a particular class, for example 0 (human beings). How do I do this? Is there some interface provided? I see that the autoShape class sets classes = None.

class autoShape(nn.Module):
    classes = None  # (optional list) filter by class

    # Then somewhere in NMS we use
    nms(self.classes)

Thanks

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@jalotra filter inference by class using the classes attribute. To detect only class 0, persons:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.classes = [0]  # list of classes to detect

from yolov5.

morestart avatar morestart commented on April 28, 2024

Can you give a demo that uses torch hub to load a model and run inference on an RTSP video?
I use the LoadStreams class to load an RTSP video:

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
dataset = LoadStreams(self.source)

for path, img, im0s, vid_cap in dataset:
    results = model(img, size=640)
    results.show()

but I get this error:
ValueError: axes don't match array

I think the error is due to LoadStreams' return value: the first dimension is the batch, while I see the hub source code expects three-channel images.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@morestart for a fully managed rtsp streaming solution I would use python detect.py --source rtsp://yourstreamhere.

The PyTorch Hub model is a single-batch solution, so you'd have to pair it with a custom streamloader as in your example, except that LoadStreams() builds padded pytorch tensor batches rather than the original image inputs that the hub autoshape models typically handle. The hub model can run inference on torch tensors; however, these are assumed to have unknown padding and thus pass through a different inference channel here that skips all postprocessing (i.e. it does not return a results object):

yolov5/models/common.py

Lines 194 to 196 in 95aefea

if isinstance(imgs, torch.Tensor):  # torch
    return self.model(imgs.to(p.device).type_as(p), augment, profile)  # inference
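One possible workaround, sketched below and untested: pass the original, unpadded frames (im0s) to the autoshaped model instead of the padded tensor batch, so AutoShape runs its own preprocessing and returns a results object.

import torch
from utils.datasets import LoadStreams  # needs a yolov5 clone on the path; the module name has changed in later versions

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
dataset = LoadStreams('rtsp://yourstreamhere')  # placeholder stream address

for path, img, im0s, vid_cap in dataset:
    # im0s holds the original (unpadded) frames, so AutoShape does its own
    # preprocessing and NMS and returns a Detections results object.
    results = model(im0s, size=640)
    results.print()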

from yolov5.

morestart avatar morestart commented on April 28, 2024

Thanks for your reply! So I need to change the LoadStreams output to 3 channels?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@morestart as I said for a fully managed solution simply use detect.py.

Torch Hub models are intended for integration into your own python projects, they are not intended for use with the detect.py dataloaders.

from yolov5.

debparth avatar debparth commented on April 28, 2024

@glenn-jocher How can I pass a confidence threshold when I'm loading the model from PyTorch Hub?

from yolov5.

valerietram88 avatar valerietram88 commented on April 28, 2024

Thanks for your reply! So I need to change the LoadStreams output to 3 channels?

@glenn-jocher My project uses multiple videos to detect objects. detect.py is an easy way to use, but it is too dependent on local files to load the model, and it's hard to scale. I think using hub to load the model is a simpler plan, and it makes my project look better. I will try to change the LoadStreams class so it can use the Hub model for inference.

Have you changed the LoadStreams class?

from yolov5.

morestart avatar morestart commented on April 28, 2024

@valerietram88 you can use detect.py, but change the model-loading part to torch hub.
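For what it's worth, a minimal, untested sketch of that swap (the autoshape=False flag and device handling here are assumptions, not the exact change detect.py needs):

import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Pull the raw (non-autoshaped) model from Torch Hub instead of loading local weights,
# then move it to the device the script would normally use.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, autoshape=False)
model = model.to(device).eval()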

from yolov5.

bipinkc19 avatar bipinkc19 commented on April 28, 2024

@glenn-jocher How do I know the inference happening in the torch hub model is happening on the GPU?

We are sending a list of np arrays, so how do we use the GPU for inference with pytorch hub?

from yolov5.

 avatar commented on April 28, 2024

Dear @glenn-jocher ,
How can I use yolov3-tiny weights in hub?
I'm working on a Raspberry Pi and want good prediction speed, and I think using the tiny version I can achieve optimal FPS.
Thank you.
Regards,
Asim

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@asim266 for YOLOv3 models you can use the ultralytics/yolov3 repo. See
https://github.com/ultralytics/yolov3#pytorch-hub

from yolov5.

 avatar commented on April 28, 2024

@glenn-jocher when I use the following line of code for yolov3-tiny
model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', force_reload=True).autoshape()
it gives me the following error:

Downloading: "https://github.com/ultralytics/yolov3/archive/master.zip" to C:\Users\Asim/.cache\torch\hub\master.zip
Traceback (most recent call last):
  File "E:/Face Mask 3Class Yolov5/new_hub.py", line 7, in <module>
    model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', force_reload=True).autoshape()
  File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 339, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 367, in _load_local
    entry = _load_entry_from_hubconf(hub_module, model)
  File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 187, in _load_entry_from_hubconf
    raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
RuntimeError: Cannot find callable yolov3-tiny in hubconf

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

PyTorch Hub model names do not support dashes, you need to use an underscore:
Screen Shot 2021-03-07 at 8 08 30 PM
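In code form (the screenshot itself isn't reproduced here):

import torch

model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True)    # underscore: works
# model = torch.hub.load('ultralytics/yolov3', 'yolov3-tiny', pretrained=True)  # dash: RuntimeError, no such callable in hubconf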

from yolov5.

 avatar commented on April 28, 2024

@glenn-jocher when I use the following line of code:
model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True, force_reload=True).autoshape()
It gives me this error:
Traceback (most recent call last):
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 37, in create
    attempt_download(fname)  # download if not found locally
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\utils\google_utils.py", line 30, in attempt_download
    tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]
IndexError: list index out of range

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:/Users/Asim/Desktop/Free Lance/New folder/camera-live-streaming/app.py", line 9, in <module>
    model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', pretrained=True, force_reload=True).autoshape()
  File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 339, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
  File "C:\Users\Asim\anaconda3\lib\site-packages\torch\hub.py", line 368, in _load_local
    model = entry(*args, **kwargs)
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 93, in yolov3_tiny
    return create('yolov3-tiny', pretrained, channels, classes, autoshape)
  File "C:\Users\Asim/.cache\torch\hub\ultralytics_yolov3_master\hubconf.py", line 51, in create
    raise Exception(s) from e
Exception: Cache maybe be out of date, try force_reload=True. See https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading for help.

But if I use this line of code:
model = torch.hub.load('ultralytics/yolov3', 'yolov3_tiny', force_reload=True).autoshape()
It doesn't give any error but also does not detect anything.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@asim266 this is the YOLOv5 PyTorch Hub tutorial. For questions about other repositories I would recommend you raise an issue there.

from yolov5.

bipinkc19 avatar bipinkc19 commented on April 28, 2024

@glenn-jocher
First of all, thank you for the fantastic community page and the help from you guys.

One question:

How do I save the model locally and load it from a local file in torch hub for yolov5? This is for a case where there is no access to the internet.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@bipinkc19 PyTorch Hub commands only need internet access the first time they are run, to download a cached copy of this repo. After this first time the cache is saved to disk and located for use in subsequent calls.
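As an aside, if the machine truly never has internet access, one option is to point torch.hub at a local clone. This is only a sketch; it assumes a torch version whose torch.hub.load accepts the source argument, and the custom entry point's weights argument has been named path_or_model in older versions and path in newer ones:

import torch

# Load from a local clone of the repo instead of GitHub; no network access is needed.
model = torch.hub.load('path/to/yolov5', 'custom', path='path/to/best.pt', source='local')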

from yolov5.

deepconsc avatar deepconsc commented on April 28, 2024

Whoever struggles with the nms error with CUDA backend:

RuntimeError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.

Update your torchvision to 0.8.1, that should resolve it.

from yolov5.

 avatar commented on April 28, 2024

Is there a way to specify a specific class like ["person", "cat"] to only identify person and cat?

from yolov5.

Lauler avatar Lauler commented on April 28, 2024

Could you please provide some more details on the Training section?

How does one properly pass the bounding box data and labels here when using a dataloader? Would very much appreciate an example with some skeleton code.

from yolov5.

Lauler avatar Lauler commented on April 28, 2024

@glenn-jocher Thanks. I had already read Train Custom Data. I was under the impression that loading model from torch.hub may have allowed more flexibility in allowing the user to specify their own Dataset similar to the PennFudanDataset in this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html , since you don't expect users to clone the yolov5 repo. But I should still organize the data according to the Train Custom Data-guide?

I think ultimately at some point in the future it is easier if users of object detection libraries can organize their data however they want and create their own train/validation dataloaders (similar to image classification tasks) as opposed to being forced to shuffle image files in folders with specific format requirements.

This is just a general remark (don't take it as negative criticism) about the design API of object detection libraries versus what has become the standard in image classification. Object detection libraries are not very flexible in comparison, and hard to adapt to your own needs or your own validation schemes (cross validation).

I will use the official way as described!

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@Lauler you can use Hub models for any purpose including training. Hub models provide nothing else except a model, you must build your own training/inference infrastructure for whatever custom purposes you have.

Fully managed solutions for training, testing, and inference are also available in train.py, test.py, detect.py.

from yolov5.

ZixuanLingit666 avatar ZixuanLingit666 commented on April 28, 2024

Why, after I set the parameter force_reload=True and tried many times, can the problem below still not be solved?

image

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@rerester hi sorry to see you are having problems. It's hard to determine what your issue may be from the small screenshot you have pasted. If you believe you have a reproducible bug, raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!

from yolov5.

suyong2 avatar suyong2 commented on April 28, 2024

@glenn-jocher
I get an error "RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU." at the 'model = torch.hub.load()' line when I run my custom model (which was trained at GPU environment) at CPU environment.
How can I use a GPU-trained custom model (trained with the YOLOv5 git version, not pytorch hub) in a CPU environment with pytorch hub? (Of course, the custom model works well in a GPU environment with pytorch hub.)

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@suyong2 backend assignment is handled automatically in YOLOv5 PyTorch Hub models, so if you have a GPU your model will load there, if not it will load on CPU.

If this does not answer your question and you believe you have a reproducible issue, we suggest you raise a new issue using the 🐛 Bug Report template, providing screenshots and a minimum reproducible example to help us better understand and diagnose your problem. Thank you!

from yolov5.

pravastacaraka avatar pravastacaraka commented on April 28, 2024

@glenn-jocher, does PyTorch hub support video inference?

from yolov5.

xinxin342 avatar xinxin342 commented on April 28, 2024

@glenn-jocher Thanks for the tutorial.
I don't know why, but the last 3 lines of code don't work.
Screenshot 2021-04-28 200153

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@xinxin342 the last 3 lines work correctly.

In Python scripts, bare expression outputs are suppressed. If you want to print outputs you can use the print() function.

from yolov5.

xinxin342 avatar xinxin342 commented on April 28, 2024

@glenn-jocher
Thank you for solving my question so quickly.

from yolov5.

rullisubekti avatar rullisubekti commented on April 28, 2024

Hello @glenn-jocher, can I load yolov5 from my local directory with torch.hub.load("mydir/yolov5/", "yolov5s")? I tried it but get the error "too many values to unpack (expected 2)".

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@rullisubekti your code demonstrates incorrect usage. For correct usage read the tutorial above.

from yolov5.

PascalHbr avatar PascalHbr commented on April 28, 2024

Is there an easy way to make inference on my own model? When I try to follow the same steps, I run into all sorts of problems. Using a custom .pt file doesn't work out of the box. I have been trying to use the autoshape wrapper provided in common.py, but I get the following error (by the way, there is a bug in the autoshape class, self.stride is not defined)

RuntimeError: Sizes of tensors must match except in dimension 1. Got 18 and 17 in dimension 2 (The offending index is 1)

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@PascalHbr loading custom YOLOv5 models in PyTorch Hub is very easy, see the 'Custom Models' section in the above tutorial.

from yolov5.

pravastacaraka avatar pravastacaraka commented on April 28, 2024

@glenn-jocher how can I run training with the PyTorch hub? From your instructions:

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

Then what should I do?

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@pravastacaraka to train YOLOv5 models see Train Custom Data tutorial:

YOLOv5 Tutorials

from yolov5.

dimzog avatar dimzog commented on April 28, 2024

@glenn-jocher That answer didn't help me. I will clarify my question.

Can I train my dataset using the PyTorch hub instead of using train.py? Because based on the information you provided above, there is a Training section:

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

I thought I could use a PyTorch hub to train my dataset. If so, how do I pass these model variables to my dataset? Is it like this?

results = model('path/to/my-dataset')

I think one should implement their own trainer(); correct me if I'm wrong @glenn-jocher.

from yolov5.

pravastacaraka avatar pravastacaraka commented on April 28, 2024

So the conclusion is that we can't do training using the PyTorch hub, right?

Training

To load a YOLOv5 model for training rather than inference, set autoshape=False. To load a model with randomly initialized weights (to train from scratch), use pretrained=False.

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)  # load pretrained
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False)  # load scratch

Then what about the information provided above? Is it meaningless? @dimzog @glenn-jocher

from yolov5.

Dylan-H-Wang avatar Dylan-H-Wang commented on April 28, 2024

When I was using the model loaded by torch.hub, it seems like these nice functions print() and pandas() only work when the input is non-tensor. If the input is a tensor, the output will be a list. My questions are:

  1. What does this list mean, and how can I use it?
  2. Is there any way to use pandas() if the input data are tensors?

Thank you!

from yolov5.

rullisubekti avatar rullisubekti commented on April 28, 2024

Hello @glenn-jocher, I have an issue: when I run detect.py versus loading the model using torch.hub.load, with the same sample data and weights file, I get different detection results and different xyxy values. Why? Thank you!

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@rullisubekti these two topics are separate. detect.py is a fully managed inference solution that does not use the AutoShape() wrapper. YOLOv5 PyTorch Hub models are intended for your own custom python workflows and utilize the AutoShape() wrapper.

from yolov5.

Laudarisd avatar Laudarisd commented on April 28, 2024

Hello everyone, I am stuck here; can anyone give me hints?
I tried to import a custom model and get the prediction boxes as in the example given.
This is what I did so far; it prints how many detections there are in each image but doesn't show xmin, ymin, xmax and ymax.

import cv2
import torch
from PIL import Image
import glob

#model
path = "./"
#model = torch.load('./last.pt')
model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt')  # custom model
CUDA_VISIBLE_DEVICES = "0"

model.conf = 0.25  # confidence threshold (0-1)
model.iou = 0.45  # NMS IoU threshold (0-1)

dataset_name = 'test_1'
test_img_path = './' + dataset_name + '/*.png'

test_imgs = sorted(glob.glob(test_img_path))
print(len(test_imgs))


for img in test_imgs:
    #print(img)
    #file_name = img.split('/')[-1]
    image = cv2.imread(img)
    img1 = Image.open(img)
    #print(img)
    img2 = cv2.imread(img)[:, :, ::-1]
    imgs = [img2]
    #print(img2)
    results = model(imgs, size = 640)
    results.print()
    results.xyxy[0]
    results.pandas().xyxy[0]

this is the result

5
image 1/1: 1023x1920 4 yess
Speed: 15.6ms pre-process, 27.5ms inference, 1.3ms NMS per image at shape (1, 3, 352, 640)
image 1/1: 1023x1920 6 yess
.........

Any help would be appreciated.

Thanks a lot.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@Laudarisd in Python if you want to see the contents of a variable you might want to print its value.
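For example, something like this (a sketch; the test image path is hypothetical and the custom weights path matches the snippet above):

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='./best.pt')  # custom weights as above
results = model('test_1/example.png', size=640)  # hypothetical test image

print(results.xyxy[0])           # tensor of [xmin, ymin, xmax, ymax, confidence, class] rows
print(results.pandas().xyxy[0])  # same boxes as a pandas DataFrame with class names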

from yolov5.

dllu avatar dllu commented on April 28, 2024

The first simple example doesn't seem to work...

(env) zxcv > cat wtf.py
#!/usr/bin/env python3
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Image
img = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(img)

results.print()

(env) zxcv > ./wtf.py
Downloading: "https://github.com/ultralytics/yolov5/archive/master.zip" to /home/dllu/.cache/torch/hub/master.zip
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
Adding AutoShape...
YOLOv5 🚀 2021-5-25 torch 1.9.0.dev20210525+cu111 CUDA:0 (NVIDIA GeForce RTX 3090, 24234.625MB)

/home/dllu/zxcv/env/lib/python3.9/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1260.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Traceback (most recent call last):
  File "/home/dllu/zxcv/./wtf.py", line 13, in <module>
    results.print()
  File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 344, in print
    self.display(pprint=True)  # print results
  File "/home/dllu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 322, in display
    str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, "  # add to string
IndexError: list index out of range

EDIT: I deleted the line from .cache/torch/hub/ultralytics_yolov5_master/models/common.py, line 322 and now it works. Seems like a bug though.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@dllu both examples work correctly, just checked:

Screenshot 2021-05-26 at 00 49 28

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • ✅ Minimal – Use as little code as possible that still produces the same problem
  • ✅ Complete – Provide all parts someone else needs to reproduce your problem in the question itself
  • ✅ Reproducible – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

  • ✅ Current – Verify that your code is up-to-date with current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
  • ✅ Unmodified – Your problem must be reproducible without any modifications to the codebase in this repository. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

from yolov5.

Laudarisd avatar Laudarisd commented on April 28, 2024

@dllu Actually I also encountered the same problem while doing inference in Docker. The strange thing is there is no problem when I run the detect code locally. My local PC has Ubuntu 20.04. I guess this is an issue with the Python version, but I am not sure.

from yolov5.

Laudarisd avatar Laudarisd commented on April 28, 2024

Hi @glenn-jocher, regarding "in Python if you want to see the contents of a variable you might want to print its value": could you give me some hints on how to visualize the variables?

Thank you.

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@Laudarisd

x=1
print(x)

from yolov5.

lonnylundsten avatar lonnylundsten commented on April 28, 2024

Can we run inference on a video with YOLOv5 in PyTorch Hub?
If so, can you show a brief example of that?

#open video
vid1 = cv2.VideoCapture('/path/to/video.mp4')

#Inference
results = model(vid1, size=640)

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@lonnylundsten YOLOv5 PyTorch Hub inference is meant for integration into your own python workflows.

For a fully managed inference solution you can use detect.py.
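If you do want to stay in your own script, a rough frame-by-frame sketch is below (the video path is a placeholder; this is only an idea, not a managed solution like detect.py):

import cv2
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

cap = cv2.VideoCapture('/path/to/video.mp4')  # placeholder path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame[:, :, ::-1], size=640)  # BGR to RGB before inference
    results.print()
cap.release()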

from yolov5.

jmayank23 avatar jmayank23 commented on April 28, 2024

I used the command given in the documentation to load a custom model-
model = torch.hub.load('ultralytics/yolov5', 'custom', path='/content/yolov5/runs/train/yolov5s_results3/weights/best.pt') # default

But got the following error-
ImportError: cannot import name 'save_one_box' from 'utils.general' (/content/yolov5/utils/general.py)

Furthermore, I checked whether that was the case and noticed that the function is indeed there in general.py.

Please help

Screenshot 2021-06-05 at 1 21 59 AM

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@jmayank23 👋 hi, thanks for letting us know about this problem with YOLOv5 🚀. We've created a few short guidelines below to help users provide what we need in order to get started investigating a possible problem.

How to create a Minimal, Reproducible Example

When asking a question, people will be better able to provide help if you provide code that they can easily understand and use to reproduce the problem. This is referred to by community members as creating a minimum reproducible example. Your code that reproduces the problem should be:

  • ✅ Minimal – Use as little code as possible that still produces the same problem
  • ✅ Complete – Provide all parts someone else needs to reproduce your problem in the question itself
  • ✅ Reproducible – Test the code you're about to provide to make sure it reproduces the problem

In addition to the above requirements, for Ultralytics to provide assistance your code should be:

  • ✅ Current – Verify that your code is up-to-date with current GitHub master, and if necessary git pull or git clone a new copy to ensure your problem has not already been resolved by previous commits.
  • ✅ Unmodified – Your problem must be reproducible without any modifications to the codebase in this repository. Ultralytics does not provide support for custom code ⚠️.

If you believe your problem meets all of the above criteria, please close this issue and raise a new one using the 🐛 Bug Report template and providing a minimum reproducible example to help us better understand and diagnose your problem.

Thank you! 😃

from yolov5.

almog-gueta avatar almog-gueta commented on April 28, 2024

Hello,
I want to train the YOLOv5 model from scratch (not using the pretrained weights) on my own dataset and classes for a task of Face Mask Detection.

I have seen that in order to train I should load:
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False, pretrained=False) # load scratch

However, how do I actually train it? Can I use it as one layer in my model?

Thank you,
Almog

from yolov5.

glenn-jocher avatar glenn-jocher commented on April 28, 2024

@almog-gueta see Train Custom Data tutorial:

YOLOv5 Tutorials

from yolov5.
