reu2018DL / yolo-lite
All the trained models used while developing YOLO-LITE
Excuse me, I'm new here.
I tried to convert this project's tiny-yolov2-trial3-noBatch.cfg and tiny-yolov2-trial3-noBatch.weights to .h5 with yad2k.py, but I get this error:

```
Traceback (most recent call last):
  File "yad2k.py", line 270, in <module>
    _main(parser.parse_args())
  File "yad2k.py", line 156, in _main
    buffer=weights_file.read(weights_size * 4))
TypeError: buffer is too small for requested array
```

By the way, I'm confused because the conversion works fine with tiny-yolov2-trial3.cfg and tiny-yolov2-trial3.weights. Can you explain what's wrong?
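For context, one common cause of this error is a mismatch between what the cfg declares (e.g. batch_normalize) and what the weights file actually contains, since BN adds extra floats per conv layer. Here is a rough sketch of how Darknet lays out one convolutional layer's weights; the layer sizes are illustrative, not the exact YOLO-LITE layout:

```python
# Sketch: how Darknet stores one conv layer's weights, and why a cfg/weights
# mismatch over batch_normalize can make a converter read past end-of-file.
# The layer dimensions below are illustrative, not the YOLO-LITE cfg.

def conv_weight_count(c_in, c_out, ksize, batch_normalize):
    """Number of float32 values Darknet stores for one conv layer."""
    biases = c_out                              # always present
    bn = 3 * c_out if batch_normalize else 0    # scales, rolling mean, rolling var
    kernels = c_out * c_in * ksize * ksize
    return biases + bn + kernels

# The same conv layer parsed with and without batch norm:
with_bn = conv_weight_count(3, 16, 3, batch_normalize=True)
without_bn = conv_weight_count(3, 16, 3, batch_normalize=False)
print(with_bn, without_bn)  # BN adds 3 * c_out extra floats per layer
```

If the parser expects more floats per layer than the file holds (or the weights-file header size differs between Darknet versions), the final read comes up short, which is exactly the "buffer is too small for requested array" failure.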
Hi!
I want to find the best VOC cfg and weights, but the links don't seem to point to the files. Can you help me?
[best PASCAL cfg](https://github.com/reu2018DL/yolo-lite) | [best PASCAL weights](https://github.com/reu2018DL/yolo-lite)
Hello @rachuang22 and @Jped, first of all congratulations on the paper, a very enjoyable read.
I would like to ask how you obtained the weights for each architecture.
For example, if I want to remove one of the two 128-filter layers in your Trial 3 NB to compare performance, what should I do with the weights? Should I still use the original weights you provide and retrain my model, or should I retrain from scratch with randomly initialized weights for that architecture?
What was your process for getting weights for each trial? I don't understand how one can get a pair of untrained weights with the Darknet framework.
Hi, thanks for sharing. I have a doubt: why is it faster without BN? At inference time the BN layers can be merged into the preceding convolutions, so why would removing them make it faster?
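For context, this is what merging (folding) a batch-norm layer into the preceding convolution looks like at inference time; the shapes and values below are synthetic, and the conv is flattened to a matrix-vector product for brevity:

```python
import numpy as np

# Sketch: folding a batch-norm layer into the preceding convolution.
# After folding, the conv alone produces the same output, which is why
# a with-BN model can match a no-BN model's speed once BN is merged.

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return conv weights/bias equivalent to conv followed by BN."""
    scale = gamma / np.sqrt(var + eps)   # per-output-channel scale
    w_folded = w * scale[:, None]        # w shape: (c_out, fan_in)
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

rng = np.random.default_rng(0)
c_out, fan_in = 4, 27
w = rng.normal(size=(c_out, fan_in)); b = rng.normal(size=c_out)
gamma = rng.uniform(0.5, 1.5, c_out); beta = rng.normal(size=c_out)
mean = rng.normal(size=c_out); var = rng.uniform(0.5, 1.5, c_out)
x = rng.normal(size=fan_in)

y_conv_bn = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
y_folded = wf @ x + bf
print(np.allclose(y_conv_bn, y_folded))  # True: folded conv == conv + BN
```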
I have my own dataset and I was trying to train YOLO-LITE on it. I followed the instructions for training YOLOv2 on custom objects, but when I start training this error comes up:
```
./darknet detector train cfg/obj.data cfg/yoloc.cfg
yoloc
layer     filters    size              input                output
    0 conv     16  3 x 3 / 1   224 x 224 x   3   ->   224 x 224 x  16  0.043 BFLOPs
    1 max          2 x 2 / 2   224 x 224 x  16   ->   112 x 112 x  16
    2 conv     32  3 x 3 / 1   112 x 112 x  16   ->   112 x 112 x  32  0.116 BFLOPs
    3 max          2 x 2 / 2   112 x 112 x  32   ->    56 x  56 x  32
    4 conv     64  3 x 3 / 1    56 x  56 x  32   ->    56 x  56 x  64  0.116 BFLOPs
    5 max          2 x 2 / 2    56 x  56 x  64   ->    28 x  28 x  64
    6 conv    128  3 x 3 / 1    28 x  28 x  64   ->    28 x  28 x 128  0.116 BFLOPs
    7 max          2 x 2 / 2    28 x  28 x 128   ->    14 x  14 x 128
    8 conv     35  3 x 3 / 1    14 x  14 x 128   ->    14 x  14 x  35  0.016 BFLOPs
    9 max          2 x 2 / 2    14 x  14 x  35   ->     7 x   7 x  35
   10 conv    256  3 x 3 / 1     7 x   7 x  35   ->     7 x   7 x 256  0.008 BFLOPs
   11 conv    425  1 x 1 / 1     7 x   7 x 256   ->     7 x   7 x 425  0.011 BFLOPs
   12 detection
darknet: ./src/parser.c:360: parse_region: Assertion `l.outputs == params.inputs' failed.
Aborted (core dumped)
```
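In YOLOv2-style cfgs this assertion usually means the last convolutional layer's filter count does not match what the [region] layer expects: filters must equal num * (classes + 5). The 425 in the log above is the COCO setting (5 anchors, 80 classes), so the last conv layer and/or the region section likely still carry COCO values while obj.data defines a different class count. A small sketch of the relationship:

```python
# Sketch: the filter count required of the conv layer right before a
# YOLOv2 [region] layer. 425 in the log is the COCO value; a custom
# dataset needs this recomputed and written into the cfg.

def region_filters(num_anchors, num_classes):
    # each anchor predicts x, y, w, h, objectness, plus one score per class
    return num_anchors * (num_classes + 5)

print(region_filters(5, 80))  # 425, the COCO setting seen in the log
print(region_filters(5, 1))   # 30, e.g. for a single-class dataset
```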
Hello there,
I know little about deep learning. Actually, I just want to use YOLO-LITE directly in a Linux environment. Can you give me some steps? Thanks a lot.
Why do I see no speed difference when testing tiny-yolov2-trial3 and tiny-yolov2-trial3-noBatch?
What is your test command with Darknet? Is it the following?
darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights test.mp4 -out_filename res.avi
The device I am using: Ubuntu, i5 CPU.
Both models run at about 4 FPS in my tests, while loading the model with OpenCV 3 reaches 25 FPS. But the test speeds of the two models are the same.
I opened the live demo trained on VOC (https://reu2018dl.github.io/model_voc.html), but only the real-time video is shown, with no predicted bounding-box results. Do I need to do any configuration?
Is there only a demo? Where is the training code?
Can I train and run inference on my own data with Darknet, the way one would train and run YOLOv3 or Gaussian YOLOv3?
How many parameters does YOLO-LITE Trial 3 have? Since the model size does not give a reasonable estimate of how big the model actually is, how many parameters does this network have?
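For anyone wanting to compute such a count themselves, here is a sketch of counting parameters layer by layer. The layer widths below are an assumed tiny-YOLO-style stack, not the exact Trial 3 cfg, so substitute the real widths from the cfg file:

```python
# Sketch: counting parameters of a small Darknet-style conv net.
# The layer list below is an ASSUMED tiny-YOLO-like stack, not the
# actual YOLO-LITE Trial 3 cfg -- replace it with the real layer widths.

def conv_params(c_in, c_out, ksize, batch_norm=False):
    weights = c_out * c_in * ksize * ksize
    biases = c_out
    bn = 4 * c_out if batch_norm else 0  # gamma, beta, running mean, running var
    return weights + biases + bn

layers = [  # (c_in, c_out, ksize) -- assumed widths; max-pool layers add nothing
    (3, 16, 3), (16, 32, 3), (32, 64, 3),
    (64, 128, 3), (128, 128, 3), (128, 256, 3), (256, 125, 1),
]
total = sum(conv_params(ci, co, k) for ci, co, k in layers)
print(total)  # roughly 0.57M parameters for this assumed stack
```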
Since YOLO-LITE is similar to the tiny-YOLOv2 architecture, can we use darkflow to train YOLO-LITE?
The command

```
tempmAP = os.system("./darknet detector map data\voc.data {} {}\{}> out.txt".format(modelDir, weightsDir, filename))
```

does not work on Linux. What is the correct command on Linux?
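The backslashes in that command are Windows path separators, which Linux treats as literal characters. A sketch of building the same command portably; the example values for modelDir, weightsDir, and filename are placeholders, not the repo's actual paths:

```python
import os

# Sketch: the same darknet map command built with portable path handling.
# modelDir, weightsDir, filename are the variables from the question;
# the values here are placeholders.
modelDir = "cfg/tiny-yolov2-trial3.cfg"
weightsDir = "weights"
filename = "tiny-yolov2-trial3.weights"

weights_path = os.path.join(weightsDir, filename)  # '/' on Linux, '\\' on Windows
cmd = "./darknet detector map data/voc.data {} {} > out.txt".format(modelDir, weights_path)
print(cmd)
# os.system(cmd)  # uncomment to actually run darknet
```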
Hi,
Congrats on getting YOLO to work at 10 FPS on a CPU. I want to know how to actually use it. Which model and weights are you using to reach 10 FPS?
Please let me know.
Do you train from scratch or from pretrained weights?
In the paper, line 3 of the architecture description table says:
Trial 2: Same architecture as TYV, but input image size shrunk to 208x208
What is this TYV? There seems to be no reference to it in the paper.
Please help me out here.
Thanks
-Prashant
I am trying to use the COCO cfg and weights, but I think the cfg and weights folders are for VOC, not COCO. And for COCO there is only one cfg and no weights? Can anyone explain this for me?
Thanks
Hello @rachuang22 @Jped
I want to train YOLO-LITE on my own dataset. I have one class, and I want to use tiny-yolov2-trial3-noBatch for the cfg file along with its pretrained weights, tiny-yolov2-trial3-noBatch.weights.
Since I have one class, I need to change classes=1 and filters=30.
Is it also necessary to change the anchors? If so, how should I change them?
I tried to find anchors with this command from the AlexeyAB darknet repo:
./darknet detector calc_anchors data/obj.data -num_of_clusters ? -width 224 -height 224
but I don't know the value of the num_of_clusters parameter.
Thanks for your contribution.
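On num_of_clusters: it is the number of anchor boxes, which must match num in the cfg's [region] section; with classes=1 and filters=30, filters = num * (classes + 5) implies num = 5. Conceptually, calc_anchors clusters the (width, height) pairs of the training boxes. Here is a rough sketch using plain Euclidean k-means on synthetic boxes; darknet's version uses an IoU-based distance, but the idea is the same:

```python
import random

# Sketch: picking anchor boxes by k-means over (w, h) of training boxes.
# Plain Euclidean k-means for brevity; darknet's calc_anchors uses 1-IoU
# as the distance. The box data below is synthetic.

def kmeans_anchors(boxes, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:  # assign each box to its nearest center
            i = min(range(k), key=lambda j: (w - centers[j][0])**2 + (h - centers[j][1])**2)
            clusters[i].append((w, h))
        for i, c in enumerate(clusters):  # move each center to its cluster mean
            if c:
                centers[i] = (sum(w for w, _ in c) / len(c),
                              sum(h for _, h in c) / len(c))
    return sorted(centers)

rng = random.Random(1)
boxes = [(rng.uniform(10, 200), rng.uniform(10, 200)) for _ in range(300)]
anchors = kmeans_anchors(boxes, k=5)
print(anchors)  # 5 (w, h) anchor shapes, here in pixels
```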
I tested YOLO-LITE three months ago and successfully trained your model on my dataset; the anchor values were:
anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
But now I want to train and test it again, and I see that your anchor values have changed to:
anchors = 1.08,1.19, 3.42,4.41, 6.63,11.38, 9.42,5.11, 16.62,10.52
I trained my dataset with the new cfg file, but it cannot detect any objects. Can you explain what happened?
Hi reu2018DL,
I tested your tiny-yolov2-trial3-noBatch.cfg and tiny-yolov2-trial3-noBatch.weights on Ubuntu 16.04 using the darkflow application you mentioned, but I get 39 FPS, which is faster than your paper reports. Did you test on Windows, or is my method wrong?
Could you tell me how you calculate the FPS?
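For comparison, a minimal way to measure FPS is to time many forward passes after a warm-up. run_inference below is just a placeholder for the real model call, so the number it prints is meaningless on its own:

```python
import time

# Sketch: measuring inference FPS by averaging over many runs after a
# warm-up. run_inference is a stand-in for the real model forward pass.

def run_inference():
    time.sleep(0.001)  # placeholder for the model forward pass

def measure_fps(fn, warmup=5, runs=50):
    for _ in range(warmup):
        fn()                      # warm-up: exclude one-time setup costs
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return runs / (time.perf_counter() - start)

fps = measure_fps(run_inference)
print(f"{fps:.1f} FPS")
```

Whether the first frame (which includes model loading) is counted, and whether pre/post-processing is included, can easily explain large FPS differences between setups.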
Hi reu2018DL,
I use your tiny-yolov2-trial3-noBatch.cfg to train on my own dataset with the Darknet YOLO framework. I changed learning_rate to 0.0001 and adjusted batch and subdivisions many times, but my model's IOU is only 0.6 and it is very difficult to improve. My dataset is selected from the VOC dataset and includes people, car, motorbike, bus, and bicycle, 5 classes in total.
My main question is how to raise the IOU and make the model perform better. Thank you.