
yolo2-pytorch's Introduction

YOLOv2 in PyTorch

NOTE: This project is no longer maintained and may not be compatible with the newest PyTorch (after 0.4.0).

This is a PyTorch implementation of YOLOv2. This project is mainly based on darkflow and darknet.

I used a Cython extension for postprocessing and multiprocessing.Pool for image preprocessing. Testing an image from VOC2007 takes about 13~20 ms.

For details about YOLO and YOLOv2 please refer to their project page and the paper: YOLO9000: Better, Faster, Stronger by Joseph Redmon and Ali Farhadi.

NOTE 1: This is still an experimental project. VOC07 test mAP is about 0.71 (trained on VOC07+12 trainval, reported by @cory8249). See issue1 and issue23 for more details about training.

NOTE 2: I recommend writing your own dataloader using torch.utils.data.Dataset, since multiprocessing.Pool.imap won't stop even when there is not enough memory. An example of a dataloader for VOCDataset is in issue71; a minimal sketch follows.
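A minimal sketch of what such a dataloader could look like (the class name, paths, and annotation format are illustrative, not the repo's API; see issue71 for the real example):

    import cv2
    import numpy as np
    import torch
    from torch.utils.data import Dataset, DataLoader

    class SimpleVOCDataset(Dataset):
        # Hypothetical minimal dataset: loads one image and its boxes per item.
        def __init__(self, image_paths, annotations, inp_size=(416, 416)):
            self.image_paths = image_paths    # list of image file paths
            self.annotations = annotations    # list of (boxes N x 4, classes N) per image
            self.inp_size = inp_size

        def __len__(self):
            return len(self.image_paths)

        def __getitem__(self, idx):
            im = cv2.imread(self.image_paths[idx])[:, :, ::-1]           # BGR -> RGB
            im = cv2.resize(im, self.inp_size).astype(np.float32) / 255.0
            boxes, classes = self.annotations[idx]
            im = torch.from_numpy(np.ascontiguousarray(im.transpose(2, 0, 1)))  # C x H x W
            return im, boxes, classes

    # Note: a custom collate_fn is needed if images have different numbers of boxes.
    # loader = DataLoader(SimpleVOCDataset(paths, annos), batch_size=16,
    #                     shuffle=True, num_workers=4)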

NOTE 3: Upgrade to PyTorch 0.4: #59

Installation and demo

  1. Clone this repository

    git clone git@github.com:longcw/yolo2-pytorch.git
  2. Build the reorg layer (tf.extract_image_patches)

    cd yolo2-pytorch
    ./make.sh
  3. Download the trained model yolo-voc.weights.h5 (link updated) and set the model path in demo.py

  4. Run the demo: python demo.py.

Training YOLOv2

You can train YOLO2 on any dataset. Here we train it on VOC2007/2012.

  1. Download the training, validation, test data and VOCdevkit

    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
  2. Extract all of these tars into one directory named VOCdevkit

    tar xvf VOCtrainval_06-Nov-2007.tar
    tar xvf VOCtest_06-Nov-2007.tar
    tar xvf VOCdevkit_08-Jun-2007.tar
  3. It should have this basic structure

    $VOCdevkit/                           # development kit
    $VOCdevkit/VOCcode/                   # VOC utility code
    $VOCdevkit/VOC2007                    # image sets, annotations, etc.
    # ... and several other directories ...
  4. Since the program loads the data from yolo2-pytorch/data by default, you can set the data path as follows.

    cd yolo2-pytorch
    mkdir data
    cd data
    ln -s $VOCdevkit VOCdevkit2007
  5. Download the pretrained darknet19 model (link updated) and set the path in yolo2-pytorch/cfgs/exps/darknet19_exp1.py.

  6. (optional) Training with TensorBoard.

    To use TensorBoard, set use_tensorboard = True in yolo2-pytorch/cfgs/config.py and install TensorboardX (https://github.com/lanpa/tensorboard-pytorch). The TensorBoard log will be saved in training/runs.
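    For reference, logging scalars with TensorboardX looks roughly like this (a sketch, not the repo's exact training code; the tag names and loss values are made up):

        from tensorboardX import SummaryWriter

        writer = SummaryWriter('training/runs')          # matches the log directory above
        losses = [2.0, 1.5, 1.2]                         # stand-in for per-step loss values
        for step, loss_value in enumerate(losses):
            writer.add_scalar('train/loss', loss_value, step)
        writer.close()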

  7. Run the training program: python train.py.

Evaluation

Set the path of the trained_model in yolo2-pytorch/cfgs/config.py.

cd yolo2-pytorch
mkdir output
python test.py

Training on your own data

The forward pass requires that you supply 4 arguments to the network (a minimal call sketch follows this list):

  • im_data - image data.
    • This should be in the format C x H x W, where C corresponds to the color channels of the image and H and W are the height and width respectively.
    • Color channels should be in RGB format.
    • Use the imcv2_recolor function provided in utils/im_transform.py to preprocess your image. Also, make sure that images have been resized to 416 x 416 pixels
  • gt_boxes - A list of numpy arrays, where each one is of size N x 4 and N is the number of ground-truth boxes in the image. The four values in each row should correspond to x_bottom_left, y_bottom_left, x_top_right, and y_top_right.
  • gt_classes - A list of numpy arrays, where each array contains an integer value corresponding to the class of each bounding box provided in gt_boxes
  • dontcare - a list of lists
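A minimal sketch of assembling these arguments and calling the network, as mentioned above (the image path, box values, and tensor conversion are illustrative; check train.py for the exact pipeline this repo uses):

    import cv2
    import numpy as np
    import torch
    from utils.im_transform import imcv2_recolor

    # im_data: one RGB image, resized to 416 x 416, recolored, laid out as N x C x H x W
    im = cv2.imread('example.jpg')                 # H x W x C, BGR
    im = cv2.resize(im, (416, 416))
    im = im[:, :, ::-1]                            # BGR -> RGB
    im = imcv2_recolor(im)                         # repo-provided color preprocessing
    im_data = torch.from_numpy(
        np.ascontiguousarray(im[None].transpose(0, 3, 1, 2), dtype=np.float32))

    # one list entry per image in the batch
    gt_boxes = [np.array([[48., 240., 195., 371.]], dtype=np.float32)]  # N x 4 corner coordinates
    gt_classes = [np.array([11])]                                       # class index per box
    dontcare = [[]]                                                     # no "don't care" regions here

    # net is a Darknet19 instance; wrap im_data (Variable / .cuda()) as your PyTorch version requires
    bbox_pred, iou_pred, prob_pred = net(im_data, gt_boxes, gt_classes, dontcare)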

License: MIT

yolo2-pytorch's People

Contributors

cory8249, crazylyf, longcw, snowmasaya, wilrich-msft


yolo2-pytorch's Issues

Training Stuck on epoch XXXXXXX start

Hi,
This might just be because I'm using Python 3.6, but when I try to run train.py I just get stuck in an endless stream of:
epoch 1238472147 start....

Is there any way to fix this with imdb.py or would I have to write my own version to load the dataset?

Train on the text dataset

@longcw @cory8249 @crazylyf I am planning to train YOLO2 on the SynthText dataset. When I prepare the dataset, I need to resize the images to 416*416 and rearrange the channels so that the shape of each image is 3*416*416, right? Besides that, is there any other work I need to do?

error locating src/cuda/roi_pooling_kernel.cu.o

Hello,
I am trying to build the project by executing ./make.sh, but I get the following error locating the file roi_pooling_kernel.cu.o.

I am using:
gcc 4.9
CUDA Version 8.0.61
CudNN 6.0.21

Could not clone the repository

Cloning into 'yolo2-pytorch'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Kindly help me !!

License

Can this implementation be used for research purposes?

train class loss is always around 0.999?

I have a question about the training cls_loss. When I train on my own data, the class loss stays around 0.999. I checked my data and even changed the learning rate, but it doesn't work. Could you give me some advice? Thank you very much.

epoch 0[1/1665], loss: 55.147, bbox_loss: 0.736, iou_loss: 53.411, cls_loss: 0.999 (25.21 s/batch, rest:11:39:05) , load data: 18.786
epoch 0[11/1665], loss: 20.966, bbox_loss: 0.339, iou_loss: 19.628, cls_loss: 0.999 (3.45 s/batch, rest:1:34:58) , load data: 0.177
epoch 0[21/1665], loss: 3.289, bbox_loss: 0.270, iou_loss: 2.020, cls_loss: 0.999 (4.04 s/batch, rest:1:50:39) , load data: 0.110
epoch 0[31/1665], loss: 2.860, bbox_loss: 0.230, iou_loss: 1.641, cls_loss: 0.990 (3.96 s/batch, rest:1:47:47) , load data: 4.147
epoch 0[41/1665], loss: 2.507, bbox_loss: 0.203, iou_loss: 1.305, cls_loss: 0.999 (4.80 s/batch, rest:2:09:56) , load data: 2.297
epoch 0[51/1665], loss: 2.242, bbox_loss: 0.201, iou_loss: 1.042, cls_loss: 0.999 (4.88 s/batch, rest:2:11:17) , load data: 0.082
epoch 0[61/1665], loss: 2.071, bbox_loss: 0.192, iou_loss: 0.880, cls_loss: 0.999 (3.81 s/batch, rest:1:41:55) , load data: 0.070
epoch 0[71/1665], loss: 2.177, bbox_loss: 0.179, iou_loss: 0.999, cls_loss: 0.999 (3.75 s/batch, rest:1:39:43) , load data: 3.230
epoch 0[81/1665], loss: 1.967, bbox_loss: 0.148, iou_loss: 0.819, cls_loss: 0.999 (6.42 s/batch, rest:2:49:34) , load data: 0.141
epoch 0[91/1665], loss: 2.064, bbox_loss: 0.172, iou_loss: 0.893, cls_loss: 0.999 (3.82 s/batch, rest:1:40:12) , load data: 0.124
epoch 0[101/1665], loss: 1.991, bbox_loss: 0.154, iou_loss: 0.838, cls_loss: 0.999 (3.38 s/batch, rest:1:28:13) , load data: 0.221
epoch 0[111/1665], loss: 1.915, bbox_loss: 0.180, iou_loss: 0.736, cls_loss: 0.999 (4.16 s/batch, rest:1:47:37) , load data: 0.123
epoch 0[121/1665], loss: 1.969, bbox_loss: 0.178, iou_loss: 0.792, cls_loss: 0.999 (3.31 s/batch, rest:1:25:13) , load data: 0.089
epoch 0[131/1665], loss: 1.880, bbox_loss: 0.152, iou_loss: 0.729, cls_loss: 0.999 (4.10 s/batch, rest:1:44:56) , load data: 0.147
epoch 0[141/1665], loss: 1.845, bbox_loss: 0.147, iou_loss: 0.700, cls_loss: 0.999 (3.48 s/batch, rest:1:28:20) , load data: 2.927
epoch 0[151/1665], loss: 1.721, bbox_loss: 0.175, iou_loss: 0.556, cls_loss: 0.990 (4.85 s/batch, rest:2:02:28) , load data: 0.217

Does not build.

Executing the make.sh script generates this error:

gcc: error: /path/to/my/clone/yolo2-pytorch/layers/reorg/src/reorg_cuda_kernel.cu.o: No such file or directory

Any idea what would cause this?

Train an mAP 0.71 model by modifying 'mask' & 'scale'

I traced the YOLOv2 C code over the last few days, and I think there is a misunderstanding about 'mask' and 'scale'.

In this PyTorch repo, the mask is used in the loss function. It helps the network focus on the correct anchor boxes instead of punishing other, irrelevant boxes:

    self.iou_loss = nn.MSELoss(size_average=False)(iou_pred * iou_mask, _ious * iou_mask) / num_boxes

So how do we calculate the right scale_mask?

YOLO's mask is based on the predicted objectness (0~1) for the box.
So, if a box's predicted objectness is high (e.g. 0.9) but there is no ground truth at that position, it should be punished. The punishment = noobject_scale * (0 - predicted objectness):

    l.delta[obj_index] = l.noobject_scale * (0 - l.output[obj_index]);

Hence, this rule helps the network learn to give reasonable confidence for each box.

However, in this repo

    _iou_mask[best_ious <= cfg.iou_thresh] = cfg.noobject_scale

does not consider objectness. It punishes every unqualified box with the same value, so the detector learns objectness very poorly.

This is the most obvious case; the other 'mask' and 'scale' values are also implemented the wrong way. YOLO actually has a more complicated policy for these scale_masks (some if-else conditions). I also found that YOLO's loss is calculated before exp() and log(), not after.
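To make the contrast concrete, here is a tiny runnable sketch (toy numbers and illustrative variable names, not the repo's actual code):

    import numpy as np

    noobject_scale, iou_thresh = 1.0, 0.6
    obj_pred  = np.array([0.9, 0.1, 0.5])     # predicted objectness of three unmatched boxes
    best_ious = np.array([0.2, 0.3, 0.1])     # their best IoU with any ground truth
    noobj = best_ious <= iou_thresh

    # Darknet's rule (translated from the C line quoted above): the penalty grows with
    # confidence, so the 0.9 box is pushed much harder than the 0.1 box.
    delta_noobj = noobject_scale * (0.0 - obj_pred[noobj])
    print(delta_noobj)                        # [-0.9 -0.1 -0.5]

    # This repo's rule: a flat weight in the MSE mask, identical for all three boxes.
    iou_mask = np.zeros_like(obj_pred)
    iou_mask[noobj] = noobject_scale
    print(iou_mask)                           # [1. 1. 1.]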

By fixing the scale_mask bug, VOC07 test mAP (trained on VOC07+12 trainval) increases from 0.67 to 0.71, which is much closer to yolo-voc-weights.h5 (0.7221).

You can refer to my code darknet_v2.py. I am still debugging it and it is not complete yet; it is just to point out what I found.

Does self.global_average_pool do anything?

The darknet model defines a global average pooling layer as follows:

        # linear
        out_channels = cfg.num_anchors * (cfg.num_classes + 5)
        self.conv5 = net_utils.Conv2d(c4, out_channels, 1, 1, relu=False)
        self.global_average_pool = nn.AvgPool2d((1, 1))

The forward func uses it as such:

        conv4 = self.conv4(cat_1_3)
        conv5 = self.conv5(conv4)   # batch_size, out_channels, h, w
        global_average_pool = self.global_average_pool(conv5)

However, the kernel size of nn.AvgPool2d is 1x1. I'm confused as to what --- if anything --- this is doing. It seems like a no-op. When stepping through the code I've confirmed that np.all(conv5 == gapooled) is True.

Is this a bug?
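A quick standalone check (not code from the repo) confirms the layer is an identity:

    import torch
    import torch.nn as nn

    x = torch.randn(2, 125, 13, 13)      # batch, num_anchors * (num_classes + 5), h, w
    pool = nn.AvgPool2d((1, 1))          # 1x1 kernel, stride defaults to the kernel size
    print(torch.equal(pool(x), x))       # True: averaging a single element changes nothing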

TypeError with Python 3.5 / PyTorch 0.1.10_2

Hi,

I've converted a bunch of .py files to Python 3 and am getting now:

(py35) ~/yolo2-pytorch$ python demo.py                                                                                   
load model succ...                                                                                                                          
Traceback (most recent call last):                                                                                                          
  File "demo.py", line 49, in <module>                                                                                                      
    bbox_pred, iou_pred, prob_pred = net(im_data)                                                                                           
  File "/home/wilrich/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 202, in __call__                              
    result = self.forward(*input, **kwargs)                                                                                                 
  File "/home/wilrich/yolo2-pytorch/darknet.py", line 177, in forward                                                                       
    conv1s_reorg = self.reorg(conv1s)                                                                                                       
  File "/home/wilrich/anaconda3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 202, in __call__                              
    result = self.forward(*input, **kwargs)                                                                                                 
  File "/home/wilrich/yolo2-pytorch/layers/reorg/reorg_layer.py", line 49, in forward                                                       
    x = ReorgFunction(self.stride)(x)                                                                                                       
  File "/home/wilrich/yolo2-pytorch/layers/reorg/reorg_layer.py", line 15, in forward                                                       
    out = torch.FloatTensor(bsize, out_c, out_h, out_w)                                                                                     
TypeError: torch.FloatTensor constructor received an invalid combination of arguments - got (int, int, float, float), but expected one of:  
 * no arguments                                                                                                                             
 * (int ...)                                                                                                                                
      didn't match because some of the arguments have invalid types: (int, int, float, float)                                               
 * (torch.FloatTensor viewed_tensor)                                                                                                        
 * (torch.Size size)                                                                                                                        
 * (torch.FloatStorage data)                                                                                                                
 * (Sequence data)                                                                                                                          

Thanks for having a look,
wr
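A plausible explanation (an assumption based on the error, not a confirmed answer from this thread) is that out_h and out_w become floats under Python 3's true division, so casting them to int before constructing the tensor should resolve it:

    import torch

    h, w, stride = 26, 26, 2
    out_h, out_w = h / stride, w / stride                      # floats under Python 3
    # torch.FloatTensor(1, 64, out_h, out_w)                   # raises the TypeError above
    out = torch.FloatTensor(1, 64, int(out_h), int(out_w))     # casting restores the Python 2 behaviour
    print(out.shape)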

How to use COCO model ?

Hi @longcw ,

Thank you for sharing this awesome implementation. Actually, I found this is the only one that runs as fast as YOLO's original darknet C code. I have some questions:

Is it possible to use the 80-class COCO model instead of the 20-class PASCAL one?
How did you convert the pre-trained PASCAL weights to the HDF5 file?
Or did you train it yourself (not convert it from YOLO's original weights)?

Thank you.

How to use a custom dataset?

I would like to retrain the network on a custom dataset of food items. I'm sort of new to this. Are there instructions about how to do it? I am looking at torch.utils.data.Dataset but am not sure how it links to the rest of the program.

No module named gpu_nms

Hello, I am new to YOLOv2. When I just run python demo.py I get this error; compilation ran successfully.
[screenshot of the error]
Anybody know how to fix this?
-Thank you-

size_index is missing for preprocess_test function

When trying the demo, I found that preprocess_test takes two arguments (data, size_index), while in demo.py only one argument is passed: yolo_utils.preprocess_test((image, None, cfg.inp_size))[0]. How do I fix this?

Thanks.

cpu only

I plan to run this on a Raspberry Pi 3 board. Does your code run on a CPU-only platform?

Thanks,

next_batch: still epoch xxx start

When I run python demo.py, I always get epoch xxx start... and the memory keeps going up. I looked at imdb.py, but it confuses me.
Could anyone give me some advice?
(I use Python 3.6.)

yolo_to_bbox function discards bounding boxes

Hi,

When I try the demo I always get
    Traceback (most recent call last):
      File "demo.py", line 63, in <module>
        bbox_pred, iou_pred, prob_pred, image.shape, cfg, thresh)
      File "/export/mlrg/aelnouby/projects/3rdParty/yolo2-pytorch/utils/yolo.py", line 108, in postprocess
        num_classes = cfg.num_classes
    IndexError: index 571 is out of bounds for axis 0 with size 500

When I started debugging, I found that the yolo_to_bbox function call discards bounding boxes: while the input shape is (1, 165, 5, 4), the output shape is (1, 100, 5, 4), which causes the out-of-index error shown above. How can I fix this issue?

Values of generated anchor boxes

Hi ,
The Generator_anchor_box file generates 10 values for the anchors. I have a question about these values: since we have 5 anchors and the generator produces 10 values, the first two of the 10 values most likely relate to the first anchor box, right? If so, what do these two values mean? Are they the W and H of the first anchor, i.e. its aspect ratio and scale?
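For what it's worth, each YOLOv2 anchor is a (width, height) prior measured in feature-map cells, so 10 numbers are simply 5 such pairs, assuming the generator emits them interleaved (a sketch; the example values below are darknet's tiny-yolo-voc anchors, used purely for illustration):

    import numpy as np

    values = np.array([1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52])
    anchors = values.reshape(-1, 2)    # 5 rows of (width, height), in grid-cell units
    print(anchors)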

mAP

Have you ever evaluated the transformed trained model on VOC2007? I tried your code and got 71.9 mAP, while the original is 76.8. Then I found a tiny error in the test code; after fixing it the result went up to 72.8 mAP, which is still not enough...

opencv error when running demo.py

After running python demo.py, an error occurs.

I installed all the packages in Anaconda's pytorch env. My OpenCV is 2.4.10 (conda install opencv=2.4.10).

OpenCV Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp, line 501
Traceback (most recent call last):
  File "demo.py", line 65, in <module>
    cv2.imshow('test', im2show)
cv2.error: -------src-dir-------/opencv-2.4.10/modules/highgui/src/window.cpp:501: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

ImportError: dynamic module does not define module export function (PyInit_cython_bbox)

python 3.6
torch 0.3.0
When I run train.py:

    lxt@slave-01:~/pytorch/yolo2-pytorch$ python train.py
    Traceback (most recent call last):
      File "train.py", line 8, in <module>
        from darknet import Darknet19
      File "/home/lxt/pytorch/yolo2-pytorch/darknet.py", line 9, in <module>
        from utils.cython_bbox import bbox_ious, bbox_intersections, bbox_overlaps, anchor_intersections
    ImportError: dynamic module does not define module export function (PyInit_cython_bbox)

Memory increase?

I compiled and ran demo.py successfully. But when I run train.py in GPU mode, I find the memory always increases, which eventually gets the training process killed. Can anyone help? Thanks!

cudaCheckError() failed : invalid device function

Operating system: Ubuntu 14.04
CUDA 8.0 is installed and tested.

When I run demo.py, I got the following error message:

load model succ...
X server found. dri2 connection failed!
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
X server found. dri2 connection failed!
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware
(If you have multiple ICDs installed and OpenCL works, you can ignore this message)
cudaCheckError() failed : invalid device function

Could anyone provide some advice? Thank you for the help.

File accessibility: Unable to open file

Hello, I am trying to train YOLO using the VOC dataset, following your instructions. I can successfully run and save the model, but when entering epoch 5 at iteration 305/313, I get an error because of file accessibility. Here is the complete error. The model has been saved up to epoch 5.
[screenshot of the error]
How can I fix it?
-Thank you-

Upgrade train.py to pytorch 0.4.0

The current training script breaks with the new version of pytorch. The fix is to replace lines 88-92 of train.py with:

    if torch.__version__.startswith('0.3'):
        bbox_loss += net.bbox_loss.data.cpu().numpy()[0]
        iou_loss += net.iou_loss.data.cpu().numpy()[0]
        cls_loss += net.cls_loss.data.cpu().numpy()[0]
        train_loss += loss.data.cpu().numpy()[0]
    else:
        bbox_loss += float(net.bbox_loss.data.cpu().numpy())
        iou_loss += float(net.iou_loss.data.cpu().numpy())
        cls_loss += float(net.cls_loss.data.cpu().numpy())
        train_loss += float(loss.data.cpu().numpy())

How to train my own data

Hmm... I have to ask this old question for YOLO2: if I have custom data, how do I modify the framework?

Box_Mask question

A question about the implementation of targets in this project. Why do the default box targets have values of 0.5 for x, y and 1 for width and height? And the box mask has default weights of 0.01? Does this provide incentive for predicted boxes that are unmatched to ground truth to simply predict prior boxes? Is this in the original implementation?
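One hedged way to read those defaults (my interpretation, not an authoritative answer): with YOLOv2's parameterization the network predicts sigmoid(x), sigmoid(y), exp(w), exp(h), so a target of (0.5, 0.5, 1, 1) is exactly the anchor prior sitting in the center of its own grid cell, and the small 0.01 weight gently pulls unmatched predictions toward their priors:

    # made-up cell index and anchor prior, just to show the decoding
    cx, cy = 3, 4            # grid cell of an unmatched prediction
    pw, ph = 3.42, 4.41      # its anchor prior, in cell units
    x_t, y_t, w_t, h_t = 0.5, 0.5, 1.0, 1.0   # the default targets in question
    bx, by = cx + x_t, cy + y_t               # decoded center: middle of cell (3, 4)
    bw, bh = pw * w_t, ph * h_t               # decoded size: exactly the anchor prior
    print(bx, by, bw, bh)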

cuda runtime error : invalid device function

Hello,
I'm currently working on project using YOLO v2 as a base and am very interested in using your pytorch implementation as a starting point.
I've run into a strange issue right from the start however when running the test and demo scripts.

The error is:
RuntimeError: cuda runtime error (8) : invalid device function at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/THC/THCTensorCopy.cu:204

Oddly, when running the demo script, the first image with the computer appears with detection boxes and the error only happens after hitting the key and trying to move to the next image.

I've already changed the architecture in make.sh to sm_30 which is what my video card is compatible with. Have you run into this kind of issue before? Perhaps there is another architecture setting I'm missing somewhere or maybe it has to do with my install of pytorch itself...

Let me know if you have any ideas. Once I get this running I hope to port over your mAP scoring code to pjreddie's implementation and compare scores.

error locating .cu files for building reorg layer

Hi,

This is an exciting project! Unfortunately, I've encountered some errors while getting it running locally...

While trying to build the reorg layer from make.sh (per instructions in the readme), gcc throws the following errors:

gcc: error: reorg_cuda_kernel.cu: No such file or directory
though that file does exist where it is searching for it.

and

gcc: error: /home/tyler/yolo2-pytorch/layers/roi_pooling/src/cuda/roi_pooling.cu.o: No such file or directory
and indeed this one is not present, though I assume this could be caused by the first failure?

gcc and nvcc info if that helps

gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
Cuda compilation tools, release 8.0, V8.0.44

Let me know if there's more info that would be useful.

exp(w) and exp(h)

Hi, I cannot understand why you have made these changes. Why is the exp of w and h no longer needed?

    wh_pred = torch.exp(conv5_reshaped[:, :, :, 2:4])

    # wh_pred = torch.exp(conv5_reshaped[:, :, :, 2:4])
    wh_pred = conv5_reshaped[:, :, :, 2:4]

Thank you

Nan losses during training

I've been interested in using YOLO architecture for an object detection task. As a first step I decided to clone this repo and run the examples. Upon running the train.py I get these results...

Note that I had to compile the 'roi_pooling' and 'reorg' modules (not 100% sure what these are for) with slightly different flags (-arch=sm_30) to match my laptop's GPU. In the future I intend to use AWS for full-scale training. Also, I'm new to the PyTorch framework.

/usr/bin/python2.7 /home/hal9000/Sources/yolo2-pytorch/train.py
voc_2007_trainval gt roidb loaded from /home/hal9000/Sources/yolo2-pytorch/data/cache/voc_2007_trainval_gt_roidb.pkl
load data succ...
('0-convolutional/kernel:0', (32L, 3L, 3L, 3L), (3, 3, 3, 32))
('0-convolutional/gamma:0', (32L,), (32,))
('0-convolutional/biases:0', (32L,), (32,))
('0-convolutional/moving_mean:0', (32L,), (32,))
('0-convolutional/moving_variance:0', (32L,), (32,))
('1-convolutional/kernel:0', (64L, 32L, 3L, 3L), (3, 3, 32, 64))
('1-convolutional/gamma:0', (64L,), (64,))
('1-convolutional/biases:0', (64L,), (64,))
('1-convolutional/moving_mean:0', (64L,), (64,))
('1-convolutional/moving_variance:0', (64L,), (64,))
('2-convolutional/kernel:0', (128L, 64L, 3L, 3L), (3, 3, 64, 128))
('2-convolutional/gamma:0', (128L,), (128,))
('2-convolutional/biases:0', (128L,), (128,))
('2-convolutional/moving_mean:0', (128L,), (128,))
('2-convolutional/moving_variance:0', (128L,), (128,))
('3-convolutional/kernel:0', (64L, 128L, 1L, 1L), (1, 1, 128, 64))
('3-convolutional/gamma:0', (64L,), (64,))
('3-convolutional/biases:0', (64L,), (64,))
('3-convolutional/moving_mean:0', (64L,), (64,))
('3-convolutional/moving_variance:0', (64L,), (64,))
('4-convolutional/kernel:0', (128L, 64L, 3L, 3L), (3, 3, 64, 128))
('4-convolutional/gamma:0', (128L,), (128,))
('4-convolutional/biases:0', (128L,), (128,))
('4-convolutional/moving_mean:0', (128L,), (128,))
('4-convolutional/moving_variance:0', (128L,), (128,))
('5-convolutional/kernel:0', (256L, 128L, 3L, 3L), (3, 3, 128, 256))
('5-convolutional/gamma:0', (256L,), (256,))
('5-convolutional/biases:0', (256L,), (256,))
('5-convolutional/moving_mean:0', (256L,), (256,))
('5-convolutional/moving_variance:0', (256L,), (256,))
('6-convolutional/kernel:0', (128L, 256L, 1L, 1L), (1, 1, 256, 128))
('6-convolutional/gamma:0', (128L,), (128,))
('6-convolutional/biases:0', (128L,), (128,))
('6-convolutional/moving_mean:0', (128L,), (128,))
('6-convolutional/moving_variance:0', (128L,), (128,))
('7-convolutional/kernel:0', (256L, 128L, 3L, 3L), (3, 3, 128, 256))
('7-convolutional/gamma:0', (256L,), (256,))
('7-convolutional/biases:0', (256L,), (256,))
('7-convolutional/moving_mean:0', (256L,), (256,))
('7-convolutional/moving_variance:0', (256L,), (256,))
('8-convolutional/kernel:0', (512L, 256L, 3L, 3L), (3, 3, 256, 512))
('8-convolutional/gamma:0', (512L,), (512,))
('8-convolutional/biases:0', (512L,), (512,))
('8-convolutional/moving_mean:0', (512L,), (512,))
('8-convolutional/moving_variance:0', (512L,), (512,))
('9-convolutional/kernel:0', (256L, 512L, 1L, 1L), (1, 1, 512, 256))
('9-convolutional/gamma:0', (256L,), (256,))
('9-convolutional/biases:0', (256L,), (256,))
('9-convolutional/moving_mean:0', (256L,), (256,))
('9-convolutional/moving_variance:0', (256L,), (256,))
('10-convolutional/kernel:0', (512L, 256L, 3L, 3L), (3, 3, 256, 512))
('10-convolutional/gamma:0', (512L,), (512,))
('10-convolutional/biases:0', (512L,), (512,))
('10-convolutional/moving_mean:0', (512L,), (512,))
('10-convolutional/moving_variance:0', (512L,), (512,))
('11-convolutional/kernel:0', (256L, 512L, 1L, 1L), (1, 1, 512, 256))
('11-convolutional/gamma:0', (256L,), (256,))
('11-convolutional/biases:0', (256L,), (256,))
('11-convolutional/moving_mean:0', (256L,), (256,))
('11-convolutional/moving_variance:0', (256L,), (256,))
('12-convolutional/kernel:0', (512L, 256L, 3L, 3L), (3, 3, 256, 512))
('12-convolutional/gamma:0', (512L,), (512,))
('12-convolutional/biases:0', (512L,), (512,))
('12-convolutional/moving_mean:0', (512L,), (512,))
('12-convolutional/moving_variance:0', (512L,), (512,))
('13-convolutional/kernel:0', (1024L, 512L, 3L, 3L), (3, 3, 512, 1024))
('13-convolutional/gamma:0', (1024L,), (1024,))
('13-convolutional/biases:0', (1024L,), (1024,))
('13-convolutional/moving_mean:0', (1024L,), (1024,))
('13-convolutional/moving_variance:0', (1024L,), (1024,))
('14-convolutional/kernel:0', (512L, 1024L, 1L, 1L), (1, 1, 1024, 512))
('14-convolutional/gamma:0', (512L,), (512,))
('14-convolutional/biases:0', (512L,), (512,))
('14-convolutional/moving_mean:0', (512L,), (512,))
('14-convolutional/moving_variance:0', (512L,), (512,))
('15-convolutional/kernel:0', (1024L, 512L, 3L, 3L), (3, 3, 512, 1024))
('15-convolutional/gamma:0', (1024L,), (1024,))
('15-convolutional/biases:0', (1024L,), (1024,))
('15-convolutional/moving_mean:0', (1024L,), (1024,))
('15-convolutional/moving_variance:0', (1024L,), (1024,))
('16-convolutional/kernel:0', (512L, 1024L, 1L, 1L), (1, 1, 1024, 512))
('16-convolutional/gamma:0', (512L,), (512,))
('16-convolutional/biases:0', (512L,), (512,))
('16-convolutional/moving_mean:0', (512L,), (512,))
('16-convolutional/moving_variance:0', (512L,), (512,))
('17-convolutional/kernel:0', (1024L, 512L, 3L, 3L), (3, 3, 512, 1024))
('17-convolutional/gamma:0', (1024L,), (1024,))
('17-convolutional/biases:0', (1024L,), (1024,))
('17-convolutional/moving_mean:0', (1024L,), (1024,))
('17-convolutional/moving_variance:0', (1024L,), (1024,))
load net succ...
epoch 0 start...
epoch: 0, step: 0, loss: 202.515, bbox_loss: 0.393, iou_loss: 201.162, cls_loss: 0.960 (2.10 s/batch)
epoch: 0, step: 10, loss: 11.230, bbox_loss: 0.704, iou_loss: 9.571, cls_loss: 0.955 (1.68 s/batch)
epoch: 0, step: 20, loss: 10.714, bbox_loss: 0.628, iou_loss: 9.156, cls_loss: 0.930 (1.69 s/batch)
epoch: 0, step: 30, loss: 11.044, bbox_loss: 1.107, iou_loss: 9.015, cls_loss: 0.922 (1.69 s/batch)
epoch: 0, step: 40, loss: 11.974, bbox_loss: 1.063, iou_loss: 9.978, cls_loss: 0.932 (1.69 s/batch)
epoch: 0, step: 50, loss: 12.925, bbox_loss: 2.811, iou_loss: 9.193, cls_loss: 0.921 (1.75 s/batch)
epoch: 0, step: 60, loss: 15.750, bbox_loss: 4.775, iou_loss: 10.045, cls_loss: 0.930 (1.60 s/batch)
epoch: 0, step: 70, loss: 16.090, bbox_loss: 7.294, iou_loss: 7.848, cls_loss: 0.948 (2.36 s/batch)
epoch: 0, step: 80, loss: 11.889, bbox_loss: 2.038, iou_loss: 8.919, cls_loss: 0.932 (2.43 s/batch)
epoch: 0, step: 90, loss: 14.579, bbox_loss: 4.674, iou_loss: 8.968, cls_loss: 0.936 (5.90 s/batch)
epoch: 0, step: 100, loss: 15.649, bbox_loss: 4.315, iou_loss: 10.388, cls_loss: 0.947 (9.22 s/batch)
epoch: 0, step: 110, loss: 37.384, bbox_loss: 26.982, iou_loss: 9.473, cls_loss: 0.928 (5.92 s/batch)
epoch: 0, step: 120, loss: 70.079, bbox_loss: 58.585, iou_loss: 10.556, cls_loss: 0.938 (2.78 s/batch)
epoch: 0, step: 130, loss: 15.072, bbox_loss: 5.575, iou_loss: 8.587, cls_loss: 0.910 (1.69 s/batch)
epoch: 0, step: 140, loss: 759.924, bbox_loss: 748.659, iou_loss: 10.314, cls_loss: 0.950 (1.70 s/batch)
epoch: 0, step: 150, loss: 822.332, bbox_loss: 810.596, iou_loss: 10.792, cls_loss: 0.945 (1.72 s/batch)
epoch: 0, step: 160, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.68 s/batch)
epoch: 0, step: 170, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.80 s/batch)
epoch: 0, step: 180, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.81 s/batch)
epoch: 0, step: 190, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 200, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 210, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 220, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 230, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (2.14 s/batch)
epoch: 0, step: 240, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (3.00 s/batch)
epoch: 0, step: 250, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 260, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 270, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 280, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.74 s/batch)
epoch: 0, step: 290, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 300, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 310, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.70 s/batch)
epoch: 0, step: 320, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 330, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 340, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 350, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 360, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 370, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.80 s/batch)
epoch: 0, step: 380, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 390, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 400, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 410, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 420, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 430, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 440, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 450, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (2.51 s/batch)
epoch: 0, step: 460, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.70 s/batch)
epoch: 0, step: 470, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 480, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 490, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 500, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 510, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 520, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 530, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 540, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 550, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 560, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 570, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 580, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 590, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 600, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 610, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 620, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.92 s/batch)
epoch: 0, step: 630, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.68 s/batch)
epoch: 0, step: 640, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 650, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 660, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 670, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 680, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 690, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 700, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 710, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 720, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 730, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.93 s/batch)
epoch: 0, step: 740, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 750, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 760, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.70 s/batch)
epoch: 0, step: 770, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (2.08 s/batch)
epoch: 0, step: 780, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 790, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 800, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (2.09 s/batch)
epoch: 0, step: 810, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 820, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 830, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 840, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 850, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.75 s/batch)
epoch: 0, step: 860, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 870, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 880, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 890, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.92 s/batch)
epoch: 0, step: 900, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 910, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 920, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 930, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 940, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 950, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 960, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 970, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.87 s/batch)
epoch: 0, step: 980, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 990, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1000, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1010, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1020, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1030, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1040, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1050, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1060, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1070, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
epoch: 0, step: 1080, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.76 s/batch)
epoch: 0, step: 1090, loss: nan, bbox_loss: nan, iou_loss: nan, cls_loss: nan (1.69 s/batch)
THCudaCheck FAIL file=/b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorCopy.c line=18 error=4 : unspecified launch failure
Traceback (most recent call last):
  File "/home/hal9000/Sources/yolo2-pytorch/train.py", line 74, in <module>
    im_data = net_utils.np_to_variable(im, is_cuda=True, volatile=False).permute(0, 3, 1, 2)
  File "/home/hal9000/Sources/yolo2-pytorch/utils/network.py", line 102, in np_to_variable
    v = v.cuda()
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 240, in cuda
    return CudaTransfer(device_id, async)(self)
  File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/tensor.py", line 160, in forward
    return i.cuda(async=self.async)
  File "/usr/local/lib/python2.7/dist-packages/torch/_utils.py", line 65, in _cuda
    return new_type(self.size()).copy_(self, async)
RuntimeError: cuda runtime error (4) : unspecified launch failure at /b/wheel/pytorch-src/torch/lib/THC/generic/THCTensorCopy.c:18

Process finished with exit code 1

ModuleNotFoundError: No module named 'layers.reorg._ext.reorg_layer._reorg_layer'

Hi all:
There is an error when I run: python demo.py

(myenv) lijins-MacBook-Pro:yolo2-pytorch-master lijin$ python demo.py
    Traceback (most recent call last):
      File "demo.py", line 6, in <module>
        from darknet import Darknet19
      File "/Users/lijin/Documents/GitHub/RL_tracking/yolo2-pytorch-master/darknet.py", line 8, in <module>
        from layers.reorg.reorg_layer import ReorgLayer
      File "/Users/lijin/Documents/GitHub/RL_tracking/yolo2-pytorch-master/layers/reorg/reorg_layer.py", line 3, in <module>
        from ._ext import reorg_layer
      File "/Users/lijin/Documents/GitHub/RL_tracking/yolo2-pytorch-master/layers/reorg/_ext/reorg_layer/__init__.py", line 3, in <module>
        from ._reorg_layer import lib as _lib, ffi as _ffi
    ModuleNotFoundError: No module named 'layers.reorg._ext.reorg_layer._reorg_layer'

Could anyone tell me what is wrong with this error and how to deal with it? Thanks in advance!

No module named _reorg_layer

yolo2-pytorch/layers/reorg/_ext/reorg_layer/__init__.py", line 3, in <module>
from ._reorg_layer import lib as _lib, ffi as _ffi
ImportError: No module named _reorg_layer

Undefined Symbol: PyInt_FromLong

I've written a demo program and have been trying to run this in python3 but keep getting this error:

ImportError: /home/ubuntu/git/yolo2-pytorch/layers/reorg/_ext/reorg_layer/_reorg_layer.so: undefined symbol: PyInt_FromLong

I'm assuming it was written in python2, but is there any way to convert this? I've tried but haven't gotten anything to work.

ImportError on _reorg_layer.so

Hi,

Can't seem to resolve another issue on attempting to run demo.py :

    from ._reorg_layer import lib as _lib, ffi as _ffi
    ImportError: dynamic module does not define init function (init_reorg_layer)

Seems to be thrown on importing the _reorg_layer.so in the /_ext/reorg_layer directory

Using Python 2.7 and Pytorch 0.1.10-py27_1cu80

Also, I'm using Pytorch from within an anaconda environment. Could that be causing problems?

cffi.error.VerificationError: LinkError: command 'gcc' failed with exit status 1

Hi all:
When I try to execute: ./make.sh
Then there is an error:
    Traceback (most recent call last):
      File "build.py", line 34, in <module>
        ffi.build()
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 167, in build
        _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 103, in _build_extension
        ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/cffi/api.py", line 690, in compile
        compiler_verbose=verbose, debug=debug, **kwds)
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/cffi/recompiler.py", line 1515, in recompile
        compiler_verbose, debug)
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/cffi/ffiplatform.py", line 22, in compile
        outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
      File "/Users/lijin/miniconda3/envs/myenv/lib/python3.6/site-packages/cffi/ffiplatform.py", line 58, in _build
        raise VerificationError('%s: %s' % (e.__class__.__name__, e))
    cffi.error.VerificationError: LinkError: command 'gcc' failed with exit status 1

Could anyone tell me what is wrong with this? And could you also please tell me how to deal with this error? Thanks in advance!

network output 0 and inf

I implemented your code with TensorFlow (using a pretrained VGG instead of darknet). After training for 100 epochs on VOC2007 (cls_loss uses MSE, softmax), the outputs are all zeros or inf. What could have gone wrong? Thanks!

Normalize inputs

Should we be normalizing the inputs prior to feeding them into the network [1]? All of the PyTorch networks contained in the model zoo use the same mean and std deviation for preprocessing images:

The images have to be loaded in to a range of [0, 1] and then normalized using mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]
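For reference, that torchvision-style preprocessing looks like this (a generic sketch; this repo's own pipeline uses imcv2_recolor instead, so this remains a question rather than a statement about what the code currently does):

    from torchvision import transforms

    preprocess = transforms.Compose([
        transforms.ToTensor(),                            # HWC uint8 [0, 255] -> CHW float [0, 1]
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # per-channel shift
                             std=[0.229, 0.224, 0.225]),  # per-channel scale
    ])
    # tensor = preprocess(pil_image)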
