
fpn.pytorch's Introduction

fpn.pytorch: a PyTorch implementation of Feature Pyramid Network (FPN) for object detection

Introduction

This project inherits the design of our PyTorch implementation of Faster R-CNN. Hence, it also has the following features:

  • It is pure PyTorch code. We converted all the numpy implementations to PyTorch.

  • It supports training with batch size > 1. We revised all the layers, including the dataloader, RPN, roi-pooling, etc., to train with multiple images per iteration.

  • It supports multiple GPUs. We use a multi-GPU wrapper (nn.DataParallel here), so one or more GPUs can be used flexibly, building on the two features above.

  • It supports three pooling methods: roi pooling, roi align, and roi crop. All three have been converted to support multi-image batch training.

Benchmarking

We benchmark our code thoroughly on two datasets, PASCAL VOC and COCO. Below are the results:

1). PASCAL VOC 2007 (Train/Test: 07trainval/07test, scale=600, ROI Align)

model    GPUs      Batch Size  lr    lr_decay  max_epoch  Speed/epoch  Memory/GPU  mAP
Res-101  8 TitanX  24          1e-2  10        12         0.22 hr      9688 MB     74.2

Results on COCO are on the way.

fpn.pytorch's People

Contributors

jwyang


fpn.pytorch's Issues

Definition of ResNet

In resnet.py (line 117: self.layer4 = self._make_layer(block, 512, layers[3], stride=1)), the stride is set to 1. As a result, the size of p4 equals the size of p5. Is there a reason for this setting?
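For illustration, a minimal self-contained sketch (hypothetical shapes, not the repo's code) of what the stride changes: with stride=1 a c4-sized input keeps its resolution, so p4 and p5 coincide in size, while stride=2 halves it as in the FPN paper.

    import torch
    import torch.nn as nn

    # Hypothetical c4-like feature map; only the stride of the conv differs below.
    x = torch.randn(1, 1024, 38, 50)
    conv_s1 = nn.Conv2d(1024, 2048, kernel_size=3, stride=1, padding=1)
    conv_s2 = nn.Conv2d(1024, 2048, kernel_size=3, stride=2, padding=1)
    print(conv_s1(x).shape)  # torch.Size([1, 2048, 38, 50]) -- same size as c4
    print(conv_s2(x).shape)  # torch.Size([1, 2048, 19, 25]) -- halved, as in the paper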

about rpn_loss_box is zero

I found that rpn_loss_box is always zero, because bbox_outside_weights is 0.
Maybe I should change
positive_weights = 1.0 / num_examples
negative_weights = 1.0 / num_examples
to
positive_weights = 1.0 / num_examples.item()
negative_weights = 1.0 / num_examples.item()
at line 136 of anchor_target_layer_fpn.py.
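For context, a small sketch of why this zeroes the loss, assuming old PyTorch semantics where dividing a Python float by a LongTensor takes the tensor's integer reciprocal first (so the reciprocal of 256 truncates to 0). Pulling the count out as a Python number gives the intended weight:

    import torch

    labels = torch.zeros(256, dtype=torch.long)      # toy labels, all marked valid
    num_examples = torch.sum(labels >= 0)            # a LongTensor holding 256
    # 1.0 / num_examples truncated to 0 on old PyTorch, zeroing bbox_outside_weights
    positive_weights = 1.0 / num_examples.item()     # 0.00390625, as intended
    negative_weights = 1.0 / num_examples.item()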

fpn.pytorch/lib/model/nms/src/nms_cuda.c: No such file or directory

When running make.sh, I'm getting a no such file or directory error for the file nms_cuda.c. This
appears to be missing from the lib/model/nms/src directory.

running build_ext
building '_nms' extension
creating home
creating home/thomasbalestri
creating home/thomasbalestri/PycharmProjects
creating home/thomasbalestri/PycharmProjects/fpn.pytorch
creating home/thomasbalestri/PycharmProjects/fpn.pytorch/lib
creating home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model
creating home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model/nms
creating home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model/nms/src
gcc -pthread -B /home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/include/python2.7 -c _nms.c -o ./_nms.o
gcc -pthread -B /home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/../../lib/include/THC -I/usr/local/cuda/include -I/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/include/python2.7 -c /home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model/nms/src/nms_cuda.c -o ./home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model/nms/src/nms_cuda.o
gcc: error: /home/thomasbalestri/PycharmProjects/fpn.pytorch/lib/model/nms/src/nms_cuda.c: No such file or directory
Traceback (most recent call last):
  File "build.py", line 36, in <module>
    ffi.build()
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 167, in build
    _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 103, in _build_extension
    ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/cffi/api.py", line 690, in compile
    compiler_verbose=verbose, debug=debug, **kwds)
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/cffi/recompiler.py", line 1513, in recompile
    compiler_verbose, debug)
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/cffi/ffiplatform.py", line 22, in compile
    outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
  File "/home/thomasbalestri/anaconda3/envs/pytorch0.3.0_py2/lib/python2.7/site-packages/cffi/ffiplatform.py", line 58, in _build
    raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: CompileError: command 'gcc' failed with exit status 1

TypeError: 'list' object is not callable

/home/scw4750/anaconda3/envs/pyt0.4/bin/python /media/scw4750/个人文件/liujiajia/fpn.pytorch-master/trainval_net.py
Called with args:
Namespace(batch_size=1, checkepoch=1, checkpoint=0, checkpoint_interval=10000, checksession=1, class_agnostic=False, cuda='True', dataset='pascal_voc', disp_interval=100, lr=0.001, lr_decay_gamma=0.1, lr_decay_step=5, lscale=False, mGPUs='2', max_epochs=20, net='res101', num_workers=0, optimizer='sgd', resume=False, save_dir='/liujiajia/models', session=1, start_epoch=1, use_tfboard=False)
Using config:
{'ANCHOR_RATIOS': [0.5, 1, 2],
'ANCHOR_SCALES': [8, 16, 32],
'CROP_RESIZE_WITH_MAX_POOL': False,
'CUDA': False,
'DATA_DIR': '/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/data',
'DEDUP_BOXES': 0.0625,
'EPS': 1e-14,
'EXP_DIR': 'res101',
'FEAT_STRIDE': [16],
'FPN_ANCHOR_SCALES': [32, 64, 128, 256, 512],
'FPN_ANCHOR_STRIDE': 1,
'FPN_FEAT_STRIDES': [4, 8, 16, 32, 64],
'GPU_ID': '0,1',
'HAS_MASK': True,
'MATLAB': 'matlab',
'MAX_NUM_GT_BOXES': 20,
'MOBILENET': {'DEPTH_MULTIPLIER': 1.0,
'FIXED_LAYERS': 5,
'REGU_DEPTH': False,
'WEIGHT_DECAY': 4e-05},
'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
'POOLING_MODE': 'align',
'POOLING_SIZE': 7,
'RESNET': {'FIXED_BLOCKS': 1, 'MAX_POOL': False},
'RNG_SEED': 3,
'ROOT_DIR': '/media/scw4750/个人文件/liujiajia/fpn.pytorch-master',
'TEST': {'BBOX_REG': True,
'HAS_RPN': True,
'MAX_SIZE': 1000,
'MODE': 'nms',
'NMS': 0.3,
'PROPOSAL_METHOD': 'gt',
'RPN_MIN_SIZE': 16,
'RPN_NMS_THRESH': 0.7,
'RPN_POST_NMS_TOP_N': 300,
'RPN_PRE_NMS_TOP_N': 6000,
'RPN_TOP_N': 5000,
'SCALES': [600],
'SVM': False},
'TRAIN': {'ASPECT_GROUPING': False,
'BATCH_SIZE': 128,
'BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'BBOX_NORMALIZE_MEANS': [0.0, 0.0, 0.0, 0.0],
'BBOX_NORMALIZE_STDS': [0.1, 0.1, 0.2, 0.2],
'BBOX_NORMALIZE_TARGETS': True,
'BBOX_NORMALIZE_TARGETS_PRECOMPUTED': True,
'BBOX_REG': True,
'BBOX_THRESH': 0.5,
'BG_THRESH_HI': 0.5,
'BG_THRESH_LO': 0.0,
'BIAS_DECAY': False,
'BN_TRAIN': False,
'DISPLAY': 20,
'DOUBLE_BIAS': False,
'FG_FRACTION': 0.25,
'FG_THRESH': 0.5,
'GAMMA': 0.1,
'HAS_RPN': True,
'IMS_PER_BATCH': 1,
'LEARNING_RATE': 0.001,
'MAX_SIZE': 1000,
'MOMENTUM': 0.9,
'PROPOSAL_METHOD': 'gt',
'RPN_BATCHSIZE': 256,
'RPN_BBOX_INSIDE_WEIGHTS': [1.0, 1.0, 1.0, 1.0],
'RPN_CLOBBER_POSITIVES': False,
'RPN_FG_FRACTION': 0.5,
'RPN_MIN_SIZE': 8,
'RPN_NEGATIVE_OVERLAP': 0.3,
'RPN_NMS_THRESH': 0.7,
'RPN_POSITIVE_OVERLAP': 0.7,
'RPN_POSITIVE_WEIGHT': -1.0,
'RPN_POST_NMS_TOP_N': 2000,
'RPN_PRE_NMS_TOP_N': 12000,
'SCALES': [600],
'SNAPSHOT_ITERS': 5000,
'SNAPSHOT_KEPT': 3,
'SNAPSHOT_PREFIX': 'res101_faster_rcnn',
'STEPSIZE': [30000],
'SUMMARY_INTERVAL': 180,
'TRIM_HEIGHT': 600,
'TRIM_WIDTH': 600,
'TRUNCATED': False,
'USE_ALL_GT': True,
'USE_FLIPPED': True,
'USE_GT': False,
'WEIGHT_DECAY': 0.0001},
'USE_GPU_NMS': True}
voc_2007_trainval gt roidb loaded from /media/scw4750/个人文件/liujiajia/fpn.pytorch-master/data/cache/voc_2007_trainval_gt_roidb.pkl
Loaded dataset voc_2007_trainval for training
voc_2007_trainval gt roidb loaded from /media/scw4750/个人文件/liujiajia/fpn.pytorch-master/data/cache/voc_2007_trainval_gt_roidb.pkl
Set proposal method: gt
Appending horizontally-flipped training examples...
Traceback (most recent call last):
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/trainval_net.py", line 219, in <module>
    imdb, roidb, ratio_list, ratio_index = combined_roidb(args.imdb_name)
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/roi_data_layer/roidb.py", line 116, in combined_roidb
    roidbs = [get_roidb(s) for s in imdb_names.split('+')]
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/roi_data_layer/roidb.py", line 116, in <listcomp>
    roidbs = [get_roidb(s) for s in imdb_names.split('+')]
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/roi_data_layer/roidb.py", line 113, in get_roidb
    roidb = get_training_roidb(imdb)
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/roi_data_layer/roidb.py", line 97, in get_training_roidb
    imdb.append_flipped_images()
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/datasets/imdb.py", line 118, in append_flipped_images
    boxes = self.roidb[i]['boxes'].copy()
  File "/media/scw4750/个人文件/liujiajia/fpn.pytorch-master/lib/datasets/imdb.py", line 76, in roidb
    self._roidb = self.roidb_handler()
TypeError: 'list' object is not callable

Process finished with exit code 1

How can I solve this? Thank you very much! My versions are PyTorch 0.4 and Python 3.6.
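A hedged diagnostic sketch (an assumption, not a confirmed fix): roidb_handler is expected to be a bound method such as imdb.gt_roidb, not the list that method returns, so checking what got assigned narrows the bug down.

    # 'imdb' is the dataset object from the traceback above. If this prints False,
    # something assigned the *result* of a roidb method (a list) to roidb_handler
    # instead of the method itself, which reproduces the TypeError when imdb.py
    # later calls self.roidb_handler().
    print(callable(imdb.roidb_handler))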

RCNN_roi_align ERROR when training

When I train the FPN network on my own dataset, after several steps it runs into the following error.
Traceback (most recent call last):
  File "trainval_net.py", line 335, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/home/xiaolin/xlzhang/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/xiaolin/xlzhang/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 112, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/xiaolin/xlzhang/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/xlzhang/fpn.pytorch/lib/model/fpn/fpn.py", line 236, in forward
    roi_pool_feat = self._PyramidRoI_Feat(mrcnn_feature_maps, rois, im_info)
  File "/data/xlzhang/fpn.pytorch/lib/model/fpn/fpn.py", line 134, in _PyramidRoI_Feat
    feat = self.RCNN_roi_align(feat_maps[i], rois[idx_l], scale)
  File "/home/xiaolin/xlzhang/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/xlzhang/fpn.pytorch/lib/model/roi_align/modules/roi_align.py", line 28, in forward
    scale)(features, rois)
  File "/data/xlzhang/fpn.pytorch/lib/model/roi_align/functions/roi_align.py", line 27, in forward
    rois, output)
  File "/home/xiaolin/xlzhang/anaconda2/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 197, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: invalid argument 2: out of range at /opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/THC/generic/THCTensor.c:23
Can anyone help me with this?

Thank you!

Inconsistent anchor reference (may be the cause of the lower accuracy than faster-rcnn)

Hi,

Please note that you have an inconsistent reference to the order of the anchors (lines 79 and 86):

rpn_cls_scores.append(rpn_cls_score.permute(0, 2, 3, 1).contiguous().view(batch_size, -1, 2))

rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape)

Let's say k is the number of anchors.

Then in line 79 you apply softmax after a reshape, which makes anchors:
0:k-1 associated with proposal = False
k:2k-1 associated with proposal = True.

And in line 86 your anchors are arranged in a different way:
i mod 2 == 0 associated with proposal = False
i mod 2 == 1 associated with proposal = True.

I propose to reshape the score in the following way to be consistent.
(You will also need to change a few other things in proposalLayer for it to work.)

rpn_cls_scores.append(rpn_cls_score_reshape.permute(0, 2, 3, 1).contiguous().view(batch_size, -1, 2))
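For illustration, a toy self-contained demo of the mismatch (k = 3 anchors on a 1x1 feature map; channel values chosen so the pairing is visible):

    import torch

    k = 3
    score = torch.arange(2. * k).view(1, 2 * k, 1, 1)    # channels hold 0..5
    # line-79 style: view as (batch, 2, k, 1); anchor i pairs channels (i, i + k)
    reshaped = score.view(1, 2, k, 1)
    print(reshaped[0, :, 0, 0])                          # tensor([0., 3.])
    # line-86 style: permute then view as (batch, -1, 2); anchor i pairs (2i, 2i + 1)
    interleaved = score.permute(0, 2, 3, 1).contiguous().view(1, -1, 2)
    print(interleaved[0, 0])                             # tensor([0., 1.]) -- different pairing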

irq/133-nvidia error

Every time training stops, a new process "irq/133-nvidia" appears, with CPU at 100%.

A fix for the NVIDIA driver found on the net:
systemctl daemon-reload
systemctl enable nvidia-persistenced
systemctl start nvidia-persistenced

sh make.sh error

@jwyang
I use python2.7(anaconda) and CUDA9.0 and sm_52(TITAN Xp)
When I run sh make.sh, the following errors occur:

/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/nms/src/nms_cuda.c: In function ‘nms_cuda’:
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/nms/src/nms_cuda.c:14:22: error: dereferencing pointer to incomplete type
   boxes_host->size[0],
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/nms/src/nms_cuda.c:15:22: error: dereferencing pointer to incomplete type

......

/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c: In function ‘BilinearSamplerBHWD_updateOutput’:
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:10:30: error: dereferencing pointer to incomplete type
   int batchsize = inputImages->size[0];
                              ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:11:39: error: dereferencing pointer to incomplete type
   int inputImages_height = inputImages->size[1];
                                       ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:12:38: error: dereferencing pointer to incomplete type
   int inputImages_width = inputImages->size[2];
                                      ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:13:29: error: dereferencing pointer to incomplete type
   int output_height = output->size[1];
                             ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:14:28: error: dereferencing pointer to incomplete type
   int output_width = output->size[2];
                            ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:15:41: error: dereferencing pointer to incomplete type
   int inputImages_channels = inputImages->size[3];
                                         ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:17:34: error: dereferencing pointer to incomplete type
   int output_strideBatch = output->stride[0];
                                  ^
/mnt/lustre/hezhiqun/experiments/detection/cascade-rcnn_Pytorch/lib/model/roi_crop/src/roi_crop.c:18:35: error: dereferencing pointer to incomplete type
......

demo.py is not updated for fpn

Hi @jwyang,

I don't think the demo.py file has been updated for FPN. Do you have an updated version that you can push? If so, please do; it will help a lot.

Thank you very much

how about the coco results?

Hi, thanks for your implementation. You mentioned that COCO results are on the way; have you got them yet? And just curious, are you planning to extend this FPN to Mask R-CNN?

Specify dim in softmax function

If you are using PyTorch 0.4, please specify dim when the softmax function is called in fpn.py and rpn_fpn.py; otherwise you will probably get mAP = 0 for every category.
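A minimal sketch of the change, assuming the reshaped RPN scores keep a (batch, 2, A*H, W) layout so the class axis is dim 1 (adjust the dim if your tensor layout differs):

    import torch
    import torch.nn.functional as F

    rpn_cls_score_reshape = torch.randn(1, 2, 12, 4)    # toy reshaped scores
    # explicit dim avoids the deprecated implicit choice and the mAP=0 symptom
    rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape, dim=1)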

rcnn_box loss equals to 0 when training

@jwyang Hello, I trained FPN with a ResNeXt backbone on my own dataset and I get zero rcnn_box loss during training; I wonder if something is wrong? Meanwhile, the total loss fluctuates around 0.2.
Below is some output:

[session 1][epoch  1][iter 20200] loss: 0.2516, lr: 1.00e-03
			fg/bg=(2/254), time cost: 75.063063
			rpn_cls: 0.0987, rpn_box: 0.0070, rcnn_cls: 0.0789, rcnn_box 0.0000
[session 1][epoch  1][iter 20300] loss: 0.2348, lr: 1.00e-03
			fg/bg=(4/252), time cost: 71.644270
			rpn_cls: 0.1217, rpn_box: 0.0084, rcnn_cls: 0.1442, rcnn_box 0.0000
[session 1][epoch  1][iter 20400] loss: 0.2335, lr: 1.00e-03
			fg/bg=(2/254), time cost: 67.127675
			rpn_cls: 0.0929, rpn_box: 0.0019, rcnn_cls: 0.0792, rcnn_box 0.0000
[session 1][epoch  1][iter 20500] loss: 0.2261, lr: 1.00e-03
			fg/bg=(3/253), time cost: 66.649418
			rpn_cls: 0.1408, rpn_box: 0.0142, rcnn_cls: 0.1100, rcnn_box 0.0000
[session 1][epoch  1][iter 20600] loss: 0.2505, lr: 1.00e-03
			fg/bg=(4/252), time cost: 67.386812
			rpn_cls: 0.1469, rpn_box: 0.0233, rcnn_cls: 0.1441, rcnn_box 0.0000
[session 1][epoch  1][iter 20700] loss: 0.2449, lr: 1.00e-03
			fg/bg=(4/252), time cost: 67.287636
			rpn_cls: 0.1688, rpn_box: 0.0152, rcnn_cls: 0.1434, rcnn_box 0.0000
[session 1][epoch  1][iter 20800] loss: 0.2117, lr: 1.00e-03
			fg/bg=(2/254), time cost: 66.637634
			rpn_cls: 0.0515, rpn_box: 0.0037, rcnn_cls: 0.0790, rcnn_box 0.0000
[session 1][epoch  1][iter 20900] loss: 0.2153, lr: 1.00e-03
			fg/bg=(2/254), time cost: 68.067170
			rpn_cls: 0.0550, rpn_box: 0.0020, rcnn_cls: 0.0788, rcnn_box 0.0000
[session 1][epoch  1][iter 21000] loss: 0.2342, lr: 1.00e-03
			fg/bg=(4/252), time cost: 67.029715
			rpn_cls: 0.1416, rpn_box: 0.0063, rcnn_cls: 0.1458, rcnn_box 0.0000
[session 1][epoch  1][iter 21100] loss: 0.2384, lr: 1.00e-03
			fg/bg=(4/252), time cost: 67.087833
			rpn_cls: 0.1267, rpn_box: 0.0053, rcnn_cls: 0.1444, rcnn_box 0.0000

When porting fpn.pytorch from version 0.4 to 1.0, I face an error

    if cfg.POOLING_MODE == "crop":

        grid_xy = _affine_grid_gen(rois, base_feat.size()[2:], self.grid_size)
        grid_yx = torch.stack([grid_xy.data[:,:,:,1], grid_xy.data[:,:,:,0]], 3).contiguous()
        roi_pool_feat = self.RCNN_roi_crop(base_feat, Variable(grid_yx).detach())
        if cfg.CROP_RESIZE_WITH_MAX_POOL:
            roi_pool_feat = F.max_pool2d(roi_pool_feat, 2, 2)
    else :
        roi_pool_feats = []
        box_to_levels = []
        for i, l in enumerate(range(2, 6)):
            if (roi_level == l).sum() == 0:
                continue
            idx_l = (roi_level == l).nonzero().squeeze()
            box_to_levels.append(idx_l)
            #scale = feat_maps[i].size(2) / im_info[0][0]
            if cfg.POOLING_MODE == 'align':
                feat = self.RCNN_roi_align(feat_maps[i], rois[idx_l])
            elif cfg.POOLING_MODE == 'pool':
                feat = self.RCNN_roi_pool(feat_maps[i], rois[idx_l])
            roi_pool_feats.append(feat)
 
     
        roi_pool_feat = torch.cat(roi_pool_feats, 0)       
        box_to_level = torch.cat(box_to_levels, 0)
        idx_sorted, order = torch.sort(box_to_level)
        roi_pool_feat = roi_pool_feat[order]
        
    return roi_pool_feat

in the _PyramidRoI_Feat function of fpn.py.
When porting the test_net code from torch 0.4 to 1.0, I get "RuntimeError: zero-dimensional tensor (at position 3) cannot be concatenated", because for the 724th test image box_to_levels[3] holds a single index and is zero-dimensional:
offending tensor: tensor(271, device='cuda:0')
the other tensors: tensor([3, 10, 13, 14, .... 282], device='cuda:0')

I changed some code:

  1. test_net.py (line 277)
    before: cls_dets = torch.cat(cls_boxes, cls_scores, 1)
    after: cls_dets = torch.cat((cls_boxes, cls_scores.unsqueeze(1)), 1)
  2. test_net.py (line 279)
    before: keep = nms(cls_dets, cfg.TEST.NMS)
    after: keep = nms(cls_boxes[order, :], cls_scores[order], cfg.TEST.NMS)

My environment:
the base project is your faster-rcnn, which supports torch 1.0
anaconda, PyTorch 1.0, CUDA 10
Python 2.7
dataset: VOC 2007 (the error occurs on image 001433, the 724th test image of VOC 2007)

I am wondering where I should start debugging; I cannot get a sense of it.
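A small reproduction of the zero-dimensional case under PyTorch 1.0 semantics, with view(-1) as one possible guard (an assumption, not the repo's fix):

    import torch

    roi_level = torch.tensor([4, 2, 3, 3])    # toy level assignment for 4 RoIs
    idx_l = (roi_level == 4).nonzero().squeeze()
    print(idx_l.shape)                        # torch.Size([]) -- 0-dim, torch.cat fails
    idx_l = (roi_level == 4).nonzero().view(-1)
    print(idx_l.shape)                        # torch.Size([1]) -- safe to concatenate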

Got 80.5 mAP on Pascal VOC 2007

Hi guys, I changed the stride in resnet.layer4 from 1 to 2, then trained the model on
the union set of VOC 2007 trainval and VOC 2012 trainval ("07+12") and evaluated on the
VOC 2007 test set: I got 80.5 mAP. Besides, I trained the model on VOC 2007 trainval alone
and evaluated on the VOC 2007 test set: I got 75.7 mAP. You can see the details in the repository.

LongTensor and reciprocal

Loading pretrained weights from data/pretrained_model/resnet101_caffe.pth
/home/ztd/anaconda2/envs/py2.7_tf1.4_tff/lib/python2.7/site-packages/torch/nn/functional.py:1749: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
/test/frcnn_pytorch/fpn_torch_10_16/lib/model/rpn/rpn_fpn.py:79: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape)
Traceback (most recent call last):
  File "/test/frcnn_pytorch/fpn_torch_10_16/trainval_net.py", line 330, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/home/ztd/anaconda2/envs/py2.7_tf1.4_tff/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/test/frcnn_pytorch/fpn_torch_10_16/lib/model/fpn/fpn.py", line 187, in forward
    rois, rpn_loss_cls, rpn_loss_bbox = self.RCNN_rpn(rpn_feature_maps, im_info, gt_boxes, num_boxes)
  File "/home/ztd/anaconda2/envs/py2.7_tf1.4_tff/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/test/frcnn_pytorch/fpn_torch_10_16/lib/model/rpn/rpn_fpn.py", line 109, in forward
    rpn_data = self.RPN_anchor_target((rpn_cls_score_alls.data, gt_boxes, im_info, num_boxes, rpn_shapes))
  File "/home/ztd/anaconda2/envs/py2.7_tf1.4_tff/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/test/frcnn_pytorch/fpn_torch_10_16/lib/model/rpn/anchor_target_layer_fpn.py", line 137, in forward
    positive_weights = 1.0 / num_examples
  File "/home/ztd/anaconda2/envs/py2.7_tf1.4_tff/lib/python2.7/site-packages/torch/tensor.py", line 320, in __rdiv__
    return self.reciprocal() * other
RuntimeError: reciprocal is not implemented for type torch.cuda.LongTensor

Process finished with exit code 1

I don't know how to solve this.
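A minimal sketch of one workaround, assuming the surrounding code of anchor_target_layer_fpn.py: cast the count to float (or a Python number via .item()) before the division, since reciprocal() is not implemented for torch.cuda.LongTensor.

    import torch

    labels_i = torch.tensor([1, 0, 0, -1])           # toy per-image anchor labels
    num_examples = torch.sum(labels_i >= 0).float()  # FloatTensor instead of LongTensor
    positive_weights = 1.0 / num_examples            # no LongTensor reciprocal needed
    negative_weights = 1.0 / num_examples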

GPU Memory issues about the ROI Pooling lib

Hi! Many thanks for your wonderful code. I moved the roi pooling lib on its own into one of my projects and it runs correctly after compiling. However, when I ran my code with this lib, GPU memory grew from 2 GB to over 8 GB and then CUDA threw an 'out of memory' error. I think this is due to the varying roi count per iteration together with PyTorch's dynamic-graph mechanism: since the number of rois differs every iteration, a new graph is generated each time, so GPU usage keeps climbing. I hope you can suggest how to revise the code so GPU memory stays constant while running, thanks!

Training suddenly terminates with a runtime error, please help

[session 1][epoch 1][iter 2100] loss: 1.2515, lr: 1.00e-03
fg/bg=(32/96), time cost: 46.959169
rpn_cls: 0.0647, rpn_box: 0.0156, rcnn_cls: 0.7545, rcnn_box 0.4920
[session 1][epoch 1][iter 2200] loss: 1.3776, lr: 1.00e-03
fg/bg=(32/96), time cost: 46.760157
rpn_cls: 0.2410, rpn_box: 0.1341, rcnn_cls: 0.7460, rcnn_box 0.3669
Traceback (most recent call last):
  File "trainval_net.py", line 330, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/home/k21993/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/k21993/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 60, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/k21993/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 70, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/k21993/anaconda3/envs/python27/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
RuntimeError: invalid argument 3: expecting vector of indices at /opt/conda/conda-bld/pytorch_1503966894950/work/torch/lib/THC/generic/THCTensorIndex.cu:4

When I set the pooling mode to 'pool', an error arises

/home/zhuyuhe/.conda/envs/dwzpy/lib/python3.5/site-packages/torch/nn/functional.py:1749: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
/home/zhuyuhe/duwangzhe/fpn_pytorch/lib/model/rpn/rpn_fpn.py:79: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
rpn_cls_prob_reshape = F.softmax(rpn_cls_score_reshape)
Traceback (most recent call last):
  File "trainval_net.py", line 332, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/home/zhuyuhe/.conda/envs/dwzpy/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhuyuhe/duwangzhe/fpn_pytorch/lib/model/fpn/fpn.py", line 246, in forward
    roi_pool_feat = self._PyramidRoI_Feat(mrcnn_feature_maps, rois, im_info)
  File "/home/zhuyuhe/duwangzhe/fpn_pytorch/lib/model/fpn/fpn.py", line 160, in _PyramidRoI_Feat
    feat = self.RCNN_roi_pool(feat_maps[i], rois[idx_l], scale)
  File "/home/zhuyuhe/.conda/envs/dwzpy/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhuyuhe/duwangzhe/fpn_pytorch/lib/model/roi_pooling/modules/roi_pool.py", line 14, in forward
    return RoIPoolFunction(self.pooled_height, self.pooled_width, scale)(features, rois)
  File "/home/zhuyuhe/duwangzhe/fpn_pytorch/lib/model/roi_pooling/functions/roi_pool.py", line 26, in forward
    features, rois, output, ctx.argmax)
  File "/home/zhuyuhe/.conda/envs/dwzpy/lib/python3.5/site-packages/torch/utils/ffi/__init__.py", line 197, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
TypeError: float() not supported on cdata 'void *'

Can anyone help me? Thanks!

train with multi gpus can not work

Hi, I use multiple GPUs to train but it does not work.

I add the options --cuda --mGPUs, and I am sure that multiple GPUs are available.

Can you help me?

roi pooling

grid_xy = _affine_grid_gen(rois, base_feat.size()[2:], self.grid_size)

NameError: global name 'base_feat' is not defined
How can I solve this problem? Has anyone else met it? Hoping for your reply, thanks!

Crash after trainval_net.py finishes its first training iteration

I am training on my own dataset using FPN. If I set num_workers > 0 (e.g. 4), it crashes after the first training iteration, but it is fine with num_workers = 0. My setup is:
OS: Ubuntu 16.04
PyTorch version: pytorch 0.3.1
Python version: python 2.7
CUDA version: 8.0
GPU models : Tesla P40

[session 3][epoch  1][iter    0] loss: 4.3830, lr: 1.00e-03
			fg/bg=(115/397), time cost: 5.939724
			rpn_cls: 0.6945, rpn_box: 1.2795, rcnn_cls: 2.3964, rcnn_box 0.0127
Traceback (most recent call last):
  File "trainval_net.py", line 339, in <module>
    data = data_iter.next()
  File "/root/anaconda2/lib/python2.7/site-packages/torch/utils/data/dataloader.py", line 204, in __next__
    idx, batch = self.data_queue.get()
  File "/root/anaconda2/lib/python2.7/multiprocessing/queues.py", line 378, in get
    return recv()
  File "/root/anaconda2/lib/python2.7/site-packages/torch/multiprocessing/queue.py", line 22, in recv
    return pickle.loads(buf)
  File "/root/anaconda2/lib/python2.7/pickle.py", line 1388, in loads
    return Unpickler(file).load()
  File "/root/anaconda2/lib/python2.7/pickle.py", line 864, in load
    dispatch[key](self)
  File "/root/anaconda2/lib/python2.7/pickle.py", line 1139, in load_reduce
    value = func(*args)
  File "/root/anaconda2/lib/python2.7/site-packages/torch/multiprocessing/reductions.py", line 68, in rebuild_storage_fd
    fd = multiprocessing.reduction.rebuild_handle(df)
  File "/root/anaconda2/lib/python2.7/multiprocessing/reduction.py", line 155, in rebuild_handle
    conn = Client(address, authkey=current_process().authkey)
  File "/root/anaconda2/lib/python2.7/multiprocessing/connection.py", line 169, in Client
    c = SocketClient(address)
  File "/root/anaconda2/lib/python2.7/multiprocessing/connection.py", line 308, in SocketClient
    s.connect(address)
  File "/root/anaconda2/lib/python2.7/socket.py", line 228, in meth
    return getattr(self._sock,name)(*args)
socket.error: [Errno 111] Connection refused

I saw the same problem in pytorch/pytorch#1355, but that user's Python version is 3.x, and the solution there does not work for me. @jwyang
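One speculative workaround (not from this repo): the Errno 111 arises while workers hand tensors back over per-process descriptors, and switching PyTorch's sharing strategy before building the DataLoader sometimes avoids it.

    import torch.multiprocessing as mp

    # 'file_system' shares storages via the filesystem instead of sockets and
    # file descriptors, which can sidestep the Connection refused failure.
    mp.set_sharing_strategy('file_system')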

bbox_outside_weights normalized incorrectly (anchor_target_layer_fpn.py:136)

The positive and negative weights are normalized by num_examples in anchor_target_layer_fpn.py, but num_examples is computed from the loop index i outside of the batch loop. So only the count from the last image in the batch matters: the weights for every example in the batch are normalized by the last image's count, rather than per image or by a num_examples that considers the entire batch.

    for i in range(batch_size):
      ... ...
    offset = torch.arange(0, batch_size)*gt_boxes.size(1)

    argmax_overlaps = argmax_overlaps + offset.view(batch_size, 1).type_as(argmax_overlaps)
    bbox_targets = _compute_targets_batch(anchors, gt_boxes.view(-1, 5)[argmax_overlaps.view(-1), :].view(batch_size, -1, 5))

    # use a single value instead of 4 values for easy index.
    bbox_inside_weights[labels==1] = cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS[0]

    if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:
        #note that i = batch_size-1
        num_examples = torch.sum(labels[i] >= 0)
        positive_weights = 1.0 / num_examples
        negative_weights = 1.0 / num_examples
    else:
        assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) &
                (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))

    bbox_outside_weights[labels == 1] = positive_weights
    bbox_outside_weights[labels == 0] = negative_weights
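A hedged sketch of a per-image fix, reusing the variables quoted above (cfg, batch_size, labels, and bbox_outside_weights are the repo's; only the loop placement changes): compute num_examples inside the batch loop so each image is normalized by its own count.

    if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:
        for i in range(batch_size):
            # count of labeled (fg + bg) anchors for image i, not the last image
            num_examples = torch.sum(labels[i] >= 0).item()
            bbox_outside_weights[i][labels[i] == 1] = 1.0 / num_examples
            bbox_outside_weights[i][labels[i] == 0] = 1.0 / num_examples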

COCO Results

Hi, thanks for your implementation. Have you got the COCO results yet? And I'm still wondering: why is its performance worse than your faster-rcnn implementation?

Your code doesn't match your comment.

In your code, you note that "the original paper used pool_size = 7 for cls branch, and 14 for mask branch, to save the computation time, we first use 14 as the pool_size, and then do stride=2 pooling for cls branch" (line 37 of fpn.py).
But according to your default configuration file (config.py in the utils folder), cfg.POOLING_SIZE is still 7, not 14.
There is also no stride=2 pooling in the forward function of your _FPN class. I can find a function called RCNN_roi_feat_ds in resnet.py that might have been created for this, but it is a convolution, not a pooling op, and it is not used.
Also, a comment at line 235 of fpn.py indicates the output size should be 14 x 14, but it is actually 7 x 7.
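For reference, a toy sketch (hypothetical shapes, not the repo's code) of the scheme the comment describes: pool RoI features once at 14x14, then a stride-2 max pool yields the 7x7 features for the cls branch.

    import torch
    import torch.nn.functional as F

    roi_feat_14 = torch.randn(8, 256, 14, 14)     # hypothetical 14x14 RoI features
    roi_feat_7 = F.max_pool2d(roi_feat_14, 2, 2)  # torch.Size([8, 256, 7, 7])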

change anchor size to detect extremely small objects

Hi, sorry to bother you again.
I want to change the anchor sizes to detect some extremely small objects.
In the faster rcnn version, we can directly change the line
args.set_cfgs... in trainval_net.py
but in the fpn version the line becomes
args.set_cfgs = ['FPN_ANCHOR_SCALES', '[32, 64, 128, 256, 512]', 'FPN_FEAT_STRIDES', '[4, 8, 16, 32, 64]',
and when I change the scales and strides, I get a problem:
IndexError: list index out of range
So does that mean I can only change the anchor sizes for the RPN in config.py?

Thanks for your help
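A hedged guess at the cause, with a sketch: the code indexes the two lists in parallel (one anchor scale per pyramid level), so shrink the values while keeping both lists the same length to avoid the IndexError. The trailing arguments of set_cfgs are elided here, and args is the parsed-arguments object from trainval_net.py.

    # halve every scale, keep one entry per pyramid level (5 scales, 5 strides)
    args.set_cfgs = ['FPN_ANCHOR_SCALES', '[16, 32, 64, 128, 256]',
                     'FPN_FEAT_STRIDES', '[4, 8, 16, 32, 64]']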

confused about the BN layer

Hi, thanks for your work.
The BN layer in PyTorch is
class torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
and in your code you keep the default parameters. Is that right for object detection with a small batch size?
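A common small-batch workaround (an assumption, not this repo's code): freeze the BatchNorm layers so they use pretrained running statistics instead of noisy per-batch estimates. Here model stands for any network built with nn.BatchNorm2d.

    import torch.nn as nn

    def freeze_bn(module):
        # use the stored running mean/var and stop updating the affine params
        if isinstance(module, nn.BatchNorm2d):
            module.eval()
            for p in module.parameters():
                p.requires_grad = False

    model.apply(freeze_bn)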

No such file or directory: log

Hi, I have run into a problem.

Here is the relevant output:

Called with args:
Namespace(batch_size=4, checkepoch=1, checkpoint=0, checkpoint_interval=10000, checksession=1, class_agnostic=False, cuda=True, dataset='pascal_voc', disp_interval=100, lr=0.01, lr_decay_gamma=0.1, lr_decay_step=10, lscale=False, mGPUs=True, max_epochs=20, net='res101', num_workers=0, optimizer='sgd', resume=False, save_dir='/DATACENTER2/qyj/fpn.pytorch-master/models', session=1, start_epoch=1, use_tfboard=False)
Traceback (most recent call last):
  File "trainval_net.py", line 167, in <module>
    filemode='w', level=logging.DEBUG)
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/logging/__init__.py", line 1547, in basicConfig
    hdlr = FileHandler(filename, mode)
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/logging/__init__.py", line 913, in __init__
    StreamHandler.__init__(self, self._open())
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/logging/__init__.py", line 943, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 2] No such file or directory: '/DATACENTER2/qyj/fpn.pytorch-master/logs/res101_pascal_voc_1.log'

I found these lines in the code:

if args.use_tfboard:
    from model.utils.logger import Logger
    # Set the logger
    logger = Logger('./logs')

logging.basicConfig(filename="logs/" + args.net + "_" + args.dataset + "_" + str(args.session) + ".log",
                    filemode='w', level=logging.DEBUG)
logging.info(str(datetime.now()))

I would really appreciate your help.
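A minimal fix sketch: logging.FileHandler does not create missing directories, so create logs/ before logging.basicConfig opens the file.

    import os

    if not os.path.exists('logs'):
        os.makedirs('logs')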

Training suddenly terminates after the first epoch. Looking for help, please

Here is my traceback:
[session 1][epoch 1][iter 0] loss: 4.0006, lr: 1.00e-02
fg/bg=(128/384), time cost: 7.218862
rpn_cls: 0.6919, rpn_box: 0.1386, rcnn_cls: 2.8319, rcnn_box 0.3382
Traceback (most recent call last):
  File "trainval_net.py", line 330, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 73, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 83, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/zhiqi.cheng/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
    raise output
RuntimeError: invalid argument 2: Input tensor must have same size as output tensor apart from the specified dimension at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCTensorScatterGather.cu:29

What is resnet101_caffe.pth?

When I run 'python trainval_net.py --cuda' I get:
IOError: [Errno 2] No such file or directory: 'data/pretrained_model/resnet101_caffe.pth'
What is resnet101_caffe.pth? Thank you~

TypeError: load() got an unexpected keyword argument 'encoding'

Traceback (most recent call last):
  File "test_net.py", line 329, in <module>
    imdb.evaluate_detections(all_boxes, output_dir)
  File "/home/wangzhaowei/pycharm/fpn.pytorch-master/lib/datasets/pascal_voc.py", line 349, in evaluate_detections
    self._do_python_eval(output_dir)
  File "/home/wangzhaowei/pycharm/fpn.pytorch-master/lib/datasets/pascal_voc.py", line 312, in _do_python_eval
    use_07_metric=use_07_metric)
  File "/home/wangzhaowei/pycharm/fpn.pytorch-master/lib/datasets/voc_eval.py", line 126, in voc_eval
    recs = pickle.load(f, encoding='bytes')
TypeError: load() got an unexpected keyword argument 'encoding'

Why does this happen?
I use Python 2 with torch 0.4.0.
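A minimal sketch of a version-safe load for voc_eval.py, since the encoding keyword only exists on Python 3's pickle.load (cachefile stands for the annotation cache path opened there):

    import pickle
    import sys

    with open(cachefile, 'rb') as f:
        if sys.version_info[0] >= 3:
            recs = pickle.load(f, encoding='bytes')
        else:
            recs = pickle.load(f)   # Python 2: no encoding keyword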

Cuda Error: invalid argument nms_cuda_kernel.cu

I'm experiencing a CUDA error when running the scripts.
The errors occur in lib/model/nms/src/nms_cuda_kernel.cu:
CUDA Error: invalid argument, at line 147
CUDA Error: invalid argument, at line 154
which I assume refer to lines in the nms cuda kernel.

They are triggered when executing nms in this line of lib/model/rpn/proposal_layer_fpn.py:
nms(torch.cat((proposals_single, scores_single), 1), nms_thresh)

As a check, I made sure I changed the compute capability in make.sh to match my GTX 1080. I'm using PyTorch 0.4 with Python 2.7.

I haven't worked with CUDA files before, so I'm not too sure how to solve this.

train on my dataset

If I want to use your code to train on my own dataset, what should I modify?

Working ??

Is this a working repository of FPN, or is it still under development?

Compile error (I already changed to sm_61 but the error still happens)

Traceback (most recent call last):
  File "build.py", line 35, in <module>
    ffi.build()
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 189, in build
    _build_extension(ffi, cffi_wrapper_name, target_dir, verbose)
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 111, in _build_extension
    outfile = ffi.compile(tmpdir=tmpdir, verbose=verbose, target=libname)
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/cffi/api.py", line 697, in compile
    compiler_verbose=verbose, debug=debug, **kwds)
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/cffi/recompiler.py", line 1520, in recompile
    compiler_verbose, debug)
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/cffi/ffiplatform.py", line 22, in compile
    outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
  File "/home/wrc/anaconda3/envs/pytorch0.4/lib/python3.6/site-packages/cffi/ffiplatform.py", line 58, in _build
    raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: CompileError: command 'gcc' failed with exit status 1

dimension out of range (expected to be in range of [-1, 0], but got 1)

Traceback (most recent call last):
  File "trainval_net.py", line 330, in <module>
    roi_labels = FPN(im_data, im_info, gt_boxes, num_boxes)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pci/save/Technology/Part-B/FPN/lib/model/fpn/fpn.py", line 187, in forward
    rois, rpn_loss_cls, rpn_loss_bbox = self.RCNN_rpn(rpn_feature_maps, im_info, gt_boxes, num_boxes)
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pci/save/Technology/Part-B/FPN/lib/model/rpn/rpn_fpn.py", line 100, in forward
    im_info, cfg_key, rpn_shapes))
  File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pci/save/Technology/Part-B/FPN/lib/model/rpn/proposal_layer_fpn.py", line 122, in forward
    output[i,:num_proposal,1:] = proposals_single
RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)
