
graph-rcnn.pytorch's Introduction

graph-rcnn.pytorch

Pytorch code for our ECCV 2018 paper "Graph R-CNN for Scene Graph Generation"

Introduction

This project is a set of reimplemented representative scene graph generation models based on PyTorch 1.0, including IMP, MSDN, Neural Motifs (frequency prior baseline) and our Graph R-CNN.

Our reimplementations are based on several existing repositories, most notably maskrcnn-benchmark (see the Acknowledgement section).

Why do we need this repository?

The goal of gathering all these representative methods into a single repo is to establish a fairer comparison across different methods under the same settings. As you may have noticed in the recent literature, the reported numbers for IMP, MSDN, Graph R-CNN and Neural Motifs are often confusing, especially because of the big gap between IMP-style methods (the first three) and Neural Motifs-style methods (the Neural Motifs paper and other variants built on it). We hope this repo can establish a good benchmark for various scene graph generation methods and contribute to the research community!

Checklist

  • Faster R-CNN Baseline (:balloon: 2019-07-04)
  • Scene Graph Generation Baseline (:balloon: 2019-07-06)
  • Iterative Message Passing (IMP) (:balloon: 2019-07-07)
  • Multi-level Scene Description Network (MSDN:no region caption) (:balloon: 2019-08-24)
  • Neural Motif (Frequency Prior Baseline) (:balloon: 2019-07-08)
  • Graph R-CNN (w/o relpn, GCNs) (:balloon: 2019-08-24)
  • Graph R-CNN (w relpn, GCNs) (:balloon: 2020-01-13)
  • Graph R-CNN (w relpn, aGCNs)
  • Neural Motif
  • RelDN (Graphical Contrastive Losses)

Benchmarking

Object Detection

| source | backbone | model | bs | lr | lr_decay | mAP@0.5 | mAP@0.5:0.95 |
|--------|----------|-------|----|----|----------|---------|--------------|
| this repo | Res-101 | faster r-cnn | 6 | 5e-3 | 70k,90k | 24.8 | 12.8 |

Scene Graph Generation (Frequency Prior Only)

| source | backbone | model | bs | lr | lr_decay | sgdet@20 | sgdet@50 | sgdet@100 |
|--------|----------|-------|----|----|----------|----------|----------|-----------|
| this repo | Res-101 | freq | 6 | 5e-3 | 70k,90k | 19.4 | 25.0 | 28.5 |
| motifnet | VGG-16 | freq | - | - | - | 17.7 | 23.5 | 27.6 |

* freq = frequency prior baseline
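
For reference, here is a minimal sketch of how such a frequency prior can be computed from training annotations. The variable names are hypothetical and this is not the repo's exact implementation; the 150/50 category counts follow the VG benchmark split described in Data Preparation below.

    import numpy as np

    # triplets: iterable of (subject_class, predicate, object_class) from the training set
    def build_freq_prior(triplets, num_obj_classes=150, num_predicates=50, eps=1e-3):
        counts = np.zeros((num_obj_classes, num_obj_classes, num_predicates))
        for s, p, o in triplets:
            counts[s, o, p] += 1
        # normalize into P(predicate | subject class, object class)
        return (counts + eps) / (counts + eps).sum(axis=2, keepdims=True)

    # at test time, a detected (subject, object) pair can be scored by looking up
    # prior[subject_class, object_class] and taking the highest-probability predicates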

Scene Graph Generation (Joint training)

| source | backbone | model | bs | lr | lr_decay | sgdet@20 | sgdet@50 | sgdet@100 |
|--------|----------|-------|----|----|----------|----------|----------|-----------|
| this repo | Res-101 | vanilla | 6 | 5e-3 | 70k,90k | 10.4 | 14.3 | 16.8 |

Scene Graph Generation (Step training)

| source | backbone | model | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|--------|----------|-------|----|----|---------|----------|----------|-----------|
| this repo | Res-101 | vanilla | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| this repo | Res-101 | imp | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| motifnet | VGG-16 | imp | - | - | - | 14.6 | 20.7 | 24.5 |

* You can click 'this repo' in the table above to download the checkpoints.

The above table shows that our reimplementations of the baseline and the IMP algorithm match the performance reported in motifnet.

Comparisons with other Methods

| model | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|-------|----|----|---------|----------|----------|-----------|
| vanilla | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| imp | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| msdn | 8 | 5e-3 | 24.2 | 18.3 | 23.6 | 27.1 |
| graph-rcnn(no att) | 8 | 5e-3 | 24.2 | 18.8 | 23.7 | 26.2 |

* You can click the model names in the table above to download the checkpoints.

As shown above, all models achieve significantly better numbers than those reported in the original papers. The main reason for these consistent improvements is the per-class NMS applied to object proposals before they are sent to the relationship head. We also found that the gaps between the different methods shrink significantly. Our model performs similarly to MSDN and better than IMP.

Adding RelPN to other Methods

We added our RelPN to various algorithms and compared them with the original versions.

| model | relpn | bs | lr | mAP@0.5 | sgdet@20 | sgdet@50 | sgdet@100 |
|-------|-------|----|----|---------|----------|----------|-----------|
| vanilla | no | 8 | 5e-3 | 24.2 | 10.5 | 13.8 | 16.1 |
| vanilla | yes | 8 | 5e-3 | 24.2 | 12.3 | 15.8 | 17.7 |
| imp | no | 8 | 5e-3 | 24.2 | 16.7 | 21.7 | 25.2 |
| imp | yes | 8 | 5e-3 | 24.2 | 19.2 | 23.9 | 26.3 |
| msdn | no | 8 | 5e-3 | 24.2 | 18.3 | 23.6 | 27.1 |
| msdn | yes | 8 | 5e-3 | 24.2 | 19.2 | 23.8 | 26.2 |

* You can click the model names in the table above to download the checkpoints.

Above, we can see consistent improvements across the different algorithms, which demonstrates the effectiveness of our proposed relation proposal network (RelPN).

Also, since far fewer object pairs (256, compared with >1k originally) are fed to the relation head for predicate classification, the inference time of models with RelPN is reduced significantly (~2.5x faster).

Tips and Tricks

Some important observations based on the experiments:

  • Using per-category NMS is important! We found that the main reason for the huge gap between IMP-style models and Motif-style models is that the latter apply per-category NMS before sending the graph into the scene graph generator. We will add a quantitative comparison here; a minimal sketch of per-category NMS is given after this list.

  • Different ways of computing the frequency prior lead to different results. Even a small change to the frequency prior calculation can change the performance of the scene graph generation model considerably. In Neural Motifs, we found that filter_non_overlap and filter_empty_rels are turned on to filter out some triplets and images.
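
For illustration, here is a minimal sketch of the per-category NMS trick described above, using torchvision's batched_nms; this is not the exact code used in this repo.

    import torch
    from torchvision.ops import batched_nms

    def per_class_nms(boxes, scores, labels, iou_threshold=0.5):
        # boxes: (N, 4) in (x1, y1, x2, y2); scores: (N,); labels: (N,) class ids
        # batched_nms only suppresses boxes that share the same class label,
        # so overlapping boxes of different classes are all kept
        keep = batched_nms(boxes, scores, labels, iou_threshold)
        return boxes[keep], scores[keep], labels[keep]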

Installation

Prerequisites

  • Python 3.6+
  • Pytorch 1.0
  • CUDA 8.0+

Dependencies

Install all the python dependencies using pip:

pip install -r requirements.txt

and libraries using apt-get:

apt-get update
apt-get install libglib2.0-0
apt-get install libsm6

Data Preparation

  • Visual Genome benchmarking dataset:
| Annotations | Object | Predicate |
|-------------|--------|-----------|
| #Categories | 150 | 50 |

First, make a folder in the root folder:

mkdir -p datasets/vg_bm

Here, the suffix 'bm' is short for "benchmark", indicating this is the dataset used for benchmarking. We may support other formats of the VG dataset in the future, e.g., with more categories.

Then, download and preprocess the data following this repo. Specifically, after downloading the Visual Genome dataset, you can follow these guidelines to obtain the following files:

datasets/vg_bm/imdb_1024.h5
datasets/vg_bm/bbox_distribution.npy
datasets/vg_bm/proposals.h5
datasets/vg_bm/VG-SGG-dicts.json
datasets/vg_bm/VG-SGG.h5

The above files provide all the data needed for training the object detection models and scene graph generation models listed above. A quick way to sanity-check the prepared files is sketched after this list.

  • Visual Genome bottom-up and top-down dataset:
| Annotations | Object | Attribute | Predicate |
|-------------|--------|-----------|-----------|
| #Categories | 1600 | 400 | 20 |

Soon, I will add this data loader to train the bottom-up and top-down model on more object/predicate/attribute categories.

  • Visual Genome extreme dataset:
| Annotations | Object | Attribute | Predicate |
|-------------|--------|-----------|-----------|
| #Categories | 2500 | ~600 | ~400 |

This data loader further increases the number of categories for training more fine-grained visual representations.
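
As referenced above, here is a quick way to sanity-check the prepared benchmark files. This assumes h5py is installed; the exact key names depend on the preprocessing version.

    import h5py, json

    with h5py.File("datasets/vg_bm/VG-SGG.h5", "r") as f:
        for key in f.keys():
            print(key, f[key].shape, f[key].dtype)

    with open("datasets/vg_bm/VG-SGG-dicts.json") as f:
        dicts = json.load(f)
    print(list(dicts.keys()))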

Compilation

Compile the cuda dependencies using the following commands:

cd lib/scene_parser/rcnn
python setup.py build develop

After that, all the necessary components, including nms, roi_pool and roi_align, should be compiled successfully.
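
A quick sanity check that the ops were actually built with GPU support (run from the repo root). The module path and nms signature below follow the error traces quoted in the issues further down and may differ in other versions.

    import torch
    from lib.scene_parser.rcnn import _C   # the compiled extension

    boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.]], device="cuda")
    scores = torch.tensor([0.9, 0.8], device="cuda")
    # raises "Not compiled with GPU support" if the CUDA build failed
    print(_C.nms(boxes, scores, 0.5))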

Train

Train object detection model:

  • Faster r-cnn model with resnet-101 as backbone:
python main.py --config-file configs/faster_rcnn_res101.yaml

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/faster_rcnn_res101.yaml

where NGPUS is the number of GPUs available.

Train scene graph generation model jointly (train detector and sgg as a whole):

  • Vanilla scene graph generation model with resnet-101 as backbone:
python main.py --config-file configs/sgg_res101_joint.yaml --algorithm $ALGORITHM

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/sgg_res101_joint.yaml --algorithm $ALGORITHM

where NGPUS is the number of GPUs available and ALGORITHM is the scene graph generation model name.

Train scene graph generation model stepwise (train detector first, and then sgg):

  • Vanilla scene graph generation model with resnet-101 as backbone:
python main.py --config-file configs/sgg_res101_step.yaml --algorithm $ALGORITHM

Multi-GPU training:

python -m torch.distributed.launch --nproc_per_node=$NGPUS main.py --config-file configs/sgg_res101_step.yaml --algorithm $ALGORITHM

where NGPUS is the number of GPUs available and ALGORITHM is the scene graph generation model name.

Evaluate

Evaluate object detection model:

  • Faster r-cnn model with resnet-101 as backbone:
python main.py --config-file configs/faster_rcnn_res101.yaml --inference --resume $CHECKPOINT

where CHECKPOINT is the iteration number. By default it will evaluate the whole validation/test set. However, you can specify the number of inference images by appending the following argument:

--inference $YOUR_NUMBER

⚠️ If you want to evaluate a model at your own path, you just need to change MODEL.WEIGHT_DET to your own path in faster_rcnn_res101.yaml.

Evaluate scene graph frequency baseline model:

In this case, you do not need any SGG model checkpoint; an object detector is enough to get the evaluation result. Run the following command:

python main.py --config-file configs/sgg_res101_{joint/step}.yaml --inference --use_freq_prior

In the yaml file, please specify the path MODEL.WEIGHT_DET for your object detector.

Evaluate scene graph generation model:

  • Scene graph generation model with resnet-101 as backbone:
python main.py --config-file configs/sgg_res101_{joint/step}.yaml --inference --resume $CHECKPOINT --algorithm $ALGORITHM
  • Scene graph generation model with resnet-101 as backbone and use frequency prior:
python main.py --config-file configs/sgg_res101_{joint/step}.yaml --inference --resume $CHECKPOINT --algorithm $ALGORITHM --use_freq_prior

Similarly, you can also append '--inference $YOUR_NUMBER' to evaluate on a subset of images.

⚠️ If you want to evaluate a model at your own path, you just need to change MODEL.WEIGHT_SGG to your own path in sgg_res101_{joint/step}.yaml.

Visualization

If you want to visualize some examples, simply append the following flag to the command:

--visualize

Citation

@inproceedings{yang2018graph,
    title={Graph r-cnn for scene graph generation},
    author={Yang, Jianwei and Lu, Jiasen and Lee, Stefan and Batra, Dhruv and Parikh, Devi},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    pages={670--685},
    year={2018}
}

Acknowledgement

We greatly appreciate the nicely organized code of maskrcnn-benchmark. Our codebase is largely built on top of it.

graph-rcnn.pytorch's People

Contributors

bernhardschaefer, jaesuny, jnhwkim, jwyang, zc-alexfan


graph-rcnn.pytorch's Issues

how to use

Nice work! Could you provide the instructions on how to use your code? Thanks!

Lower performance using multi-gpu

I trained detectors on a single GPU and on 6 GPUs using the commands in the README.
The detector trained with multiple GPUs shows lower performance than the single-GPU one.
Do you know why?

The performance is as follows:
Single

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.133
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.259
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.121
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.016
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.058
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.152
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.211
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.313
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.316
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.016
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.192
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.346

6 GPUs

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.090
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.188
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.075
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.015
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.037
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.103
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.164
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.241
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.243
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.014
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.129
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.267

Questions to get the qualitative results in the paper.

I am working on exploiting the outputs of the scene graph model (e.g., object i, relationship, object j),
so I modified the code that you provided.

Specifically, in model.py

I used the two variables "output" and "output_pred".

Finally, since I wanted the qualitative results, I used the top_predictions variable by implementing the function "select_top_predictions".

However, output_pred contains all relationships between all objects.

some of the results:

train, behind, sidewalk: 0.02387198619544506
train, at, roof: 0.024028906598687172
train, behind, building: 0.023647857829928398
train, between, window: 0.023976245895028114
train, between, pole: 0.0235282052308321
train, at, house: 0.023929819464683533
train, at, building: 0.023660259321331978
train, between, person: 0.023870430886745453
train, behind, windshield: 0.023930499330163002
train, between, pole: 0.023856913670897484
train, behind, sign: 0.02349434420466423
train, between, track: 0.024171145632863045
train, at, window: 0.02370303124189377
train, at, roof: 0.02452095039188862
train, behind, sign: 0.02410005033016205
train, behind, light: 0.023617111146450043
train, between, engine: 0.023823242634534836
train, between, window: 0.024225767701864243
train, between, tree: 0.023836513981223106
train, between, track: 0.024690095335245132
train, at, light: 0.0236750990152359
train, between, pole: 0.02389519289135933
train, between, track: 0.024441653862595558
train, between, window: 0.0242259930819273
sidewalk, behind, train: 0.02439986914396286
sidewalk, at, roof: 0.02377551607787609
sidewalk, behind, building: 0.02405555732548237
sidewalk, behind, window: 0.024234943091869354
sidewalk, behind, pole: 0.024084312841296196
sidewalk, behind, house: 0.0239116158336401
sidewalk, behind, building: 0.02355070225894451
sidewalk, behind, person: 0.023942159488797188
sidewalk, at, windshield: 0.024083049967885017
sidewalk, behind, pole: 0.02383407950401306
sidewalk, behind, sign: 0.023682253435254097
sidewalk, behind, track: 0.024120213463902473
sidewalk, at, window: 0.02394690178334713
sidewalk, at, roof: 0.024383747950196266
sidewalk, behind, sign: 0.024204667657613754
sidewalk, behind, light: 0.024269500747323036
sidewalk, behind, engine: 0.023881277069449425
sidewalk, behind, window: 0.023806318640708923
sidewalk, behind, tree: 0.024139899760484695
sidewalk, behind, track: 0.024355396628379822
sidewalk, behind, light: 0.02410137839615345
sidewalk, behind, pole: 0.02373342402279377
sidewalk, at, track: 0.024115504696965218
sidewalk, between, window: 0.023614520207047462
roof, behind, train: 0.02425987459719181
roof, behind, sidewalk: 0.024073176085948944
roof, behind, building: 0.02399829588830471
roof, at, window: 0.025182323530316353
roof, behind, pole: 0.024173879995942116
roof, behind, house: 0.026239316910505295
roof, behind, building: 0.02548844739794731
roof, behind, person: 0.02390027791261673
roof, at, windshield: 0.025229210034012794
roof, behind, pole: 0.023827891796827316
roof, behind, sign: 0.024496659636497498
roof, at, track: 0.025029385462403297
roof, at, window: 0.02551886811852455
roof, behind, roof: 0.025152400135993958
roof, behind, sign: 0.02517804317176342
roof, behind, light: 0.02423752285540104
roof, at, engine: 0.02450089529156685
roof, behind, window: 0.02405671961605549
roof, behind, tree: 0.026544684544205666
roof, behind, track: 0.024483991786837578
roof, behind, light: 0.024766413494944572
roof, between, pole: 0.023709215223789215
roof, at, track: 0.024978386238217354
roof, at, window: 0.024174179881811142
building, behind, train: 0.024330293759703636
building, behind, sidewalk: 0.02421494759619236
building, behind, roof: 0.02382979914546013
building, behind, window: 0.02455325610935688
building, behind, pole: 0.024986734613776207
building, behind, house: 0.02400863729417324
building, behind, building: 0.02364060841500759
building, behind, person: 0.024538518860936165
building, behind, windshield: 0.024235431104898453
building, behind, pole: 0.024516766890883446
building, behind, sign: 0.02449074201285839
building, behind, track: 0.02405826933681965
building, behind, window: 0.023972755298018456
building, at, roof: 0.024223698303103447
building, behind, sign: 0.024951158091425896
building, behind, light: 0.025016330182552338
building, behind, engine: 0.023686831817030907
building, behind, window: 0.024335479363799095
building, behind, tree: 0.02420300617814064
building, behind, track: 0.02444702386856079
building, behind, light: 0.024794165045022964
building, between, pole: 0.024759920313954353
building, behind, track: 0.0242274422198534
building, between, window: 0.024014681577682495
window, behind, train: 0.02432013303041458
window, behind, sidewalk: 0.02410159632563591
window, at, roof: 0.02584269642829895
window, behind, building: 0.0242571122944355
window, behind, pole: 0.024228986352682114

As you can see, most of the results are not meaningful for the scene.
I think I need to post-process something, but I can't come up with an idea to deal with this problem.
[attached image: detection_0]

run error

Hi, could someone help: as soon as I run this code, the very first import init_paths fails with "No module named 'init_path'". I'm a complete beginner, many thanks in advance.

Difference between roi_box_head and roi_relation_head

Hi, I have a question regarding the difference between roi_box_head and roi_relation_head. Specifically here:

We have:

    if not cfg.MODEL.RPN_ONLY:
        roi_heads.append(("box", build_roi_box_head(cfg, in_channels)))
    if cfg.MODEL.RELATION_ON:
        roi_heads.append(("relation", build_roi_relation_head(cfg, in_channels)))

When I followed build_roi_box_head and build_roi_relation_head, both have a self.predictor that seems to be exactly the same, except that one takes ROI_RELATION_HEAD.NUM_CLASSES as input and the other takes ROI_BOX_HEAD.NUM_CLASSES, and in both cases the value is "81". Are these two heads meant for object detection and relation detection respectively? That is, does the first one work on object bounding boxes while the second one takes the "union of bounding boxes" for the predicate? If so, why are both 81? Shouldn't one equal the number of object classes and the other the number of predicate classes?

Thanks.

Report for `Unconstrained` scores

Many recent papers have started to report unconstrained scene graph scores.

Two evaluation protocols are used in the literature differing in whether they enforce graph constraints over model predictions. The first graph-constrained protocol requires that the top-K triplets assign one consistent class per entity and relation. The second unconstrained protocol does not enforce any such constraints. (Herzig & Raboh et al., 2018)

My interpretation is that under the unconstrained protocol, a predicted triplet is allowed to match any one of the annotations when calculating Recall@K, if such an annotation exists in the dataset.
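
For illustration only, here is a toy sketch of the difference between the two protocols. It ignores the box/label matching by IoU that a real evaluator also performs, and all names are hypothetical.

    import numpy as np

    def recall_at_k(pred_triplets, pred_scores, gt_triplets, k, graph_constraint=True):
        # pred_triplets: list of (subject_id, predicate_label, object_id) tuples
        # pred_scores:   matching list of confidence scores
        # gt_triplets:   ground-truth triplets in the same id space
        order = np.argsort(-np.asarray(pred_scores))
        kept, seen_pairs = [], set()
        for i in order:
            s, p, o = pred_triplets[i]
            if graph_constraint:
                if (s, o) in seen_pairs:   # keep only the best predicate per object pair
                    continue
                seen_pairs.add((s, o))
            kept.append((s, p, o))
            if len(kept) == k:
                break
        matched = set(kept) & set(gt_triplets)
        return len(matched) / max(len(gt_triplets), 1)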

Can't Pickle Local Object

Hi, I've imported the mini_VG imdb as h5 files and am trying to use the training script
(from the zip that was shared in #26 (comment)),
but I get this error; any idea what's wrong?

File "C:\Program Files\Python36\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'vg_hdf5.gt_roidb. locals .lambda'

Relationship dictionary

I confirmed the object dictionary in dataset.ind_to_classes,
but I cannot find the relationship dictionary.

Could anyone let me know where it is?

Is Graph-RCNN ready?

Hi,

It has been a while since the last commit. I am wondering whether this repo already has Graph R-CNN implemented (since Graph R-CNN in the checklist of README.md is not checked).

Train this model... but OOM

I want to train this model,
but I run out of memory.
I am using a GTX Titan X.

I think there may be a memory leak...

Which GPU is suitable for this model?

not compiled with GPU support nms

Error seen when I try to train the scene graph parser

This is after I followed the instructions to compile the CUDA dependencies:
cd lib/scene_parser/rcnn
python setup.py build develop
python main.py --config-file configs/baseline_res101.yaml

Traceback (most recent call last):
  File "main.py", line 125, in <module>
    main()
  File "main.py", line 120, in main
    model = train(cfg, args)
  File "main.py", line 69, in train
    model.train()
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/model.py", line 128, in train
    loss_dict = self.scene_parser(imgs, targets)
  File "/home/dalinw/.conda/envs/torch1.1_python3.7_cuda10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/parser.py", line 47, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/home/dalinw/.conda/envs/torch1.1_python3.7_cuda10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/rpn/rpn.py", line 159, in forward
    return self._forward_train(anchors, objectness, rpn_box_regression, targets)
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/rpn/rpn.py", line 175, in _forward_train
    anchors, objectness, rpn_box_regression, targets
  File "/home/dalinw/.conda/envs/torch1.1_python3.7_cuda10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/rpn/inference.py", line 140, in forward
    sampled_boxes.append(self.forward_for_single_feature_map(a, o, b))
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/rpn/inference.py", line 120, in forward_for_single_feature_map
    score_field="objectness",
  File "/home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/structures/boxlist_ops.py", line 27, in boxlist_nms
    keep = _box_nms(boxes, score, nms_thresh)
RuntimeError: Not compiled with GPU support (nms at /home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/csrc/nms.h:22)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x1500a3f60dc5 in /home/dalinw/.conda/envs/torch1.1_python3.7_cuda10/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: nms(at::Tensor const&, at::Tensor const&, float) + 0xb2 (0x15008d9debc2 in /home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/_C.cpython-37m-x86_64-linux-gnu.so)
frame #2: <unknown function> + 0x16f36 (0x15008d9ebf36 in /home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/_C.cpython-37m-x86_64-linux-gnu.so)
frame #3: <unknown function> + 0x16fbe (0x15008d9ebfbe in /home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/_C.cpython-37m-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x1460d (0x15008d9e960d in /home/dalinw/DeployedProjects/ml_models/models/pygcn/visual_genome_deterministic/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/_C.cpython-37m-x86_64-linux-gnu.so)
<omitting python frames>
frame #63: __libc_start_main + 0xf5 (0x1500e80c6495 in /lib64/libc.so.6)

Can you take a look this error please? @jwyang

Report metrics `motif` and `IMP`

@jwyang

  1. I saw that the evaluation report includes scores labeled motif and IMP. What does this mean? The motif scores were slightly higher than the IMP ones.
  2. Which of these are the scores reported in README.md, motif or IMP?

Restoring from checkpoint

How do I restore the model from the checkpoint that I downloaded from the "this repo" link in your README? Where should I put the path to the checkpoint? Thank you!

Still having issue with "No module named 'maskrcnn_benchmark"

I ran the following command to train the scene graph parser as instructed in the readme:
python main.py --config-file configs/baseline_res101.yaml

File "/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/utils/c2_model_loading.py", line 8, in <module>
    from maskrcnn_benchmark.utils.model_serialization import load_state_dict
ModuleNotFoundError: No module named 'maskrcnn_benchmark'

@jwyang Can you take a look please? Thanks a lot!

Original file of co_nms.py and _nms.so

We recently got an original version of your code from one of our friends at Tencent AI Lab. However, when we wanted to reuse co_nms, which is used to calculate the IoU between pairs and filter the object pairs, we found that some of the original files are missing. Could you share the original co_nms file with us? My email address is [email protected].

Thanks a lot.

BBox regression and Object proposals

  1. In the Implementation Details part, the 256 object proposals come from the RPN, but in the Relation Proposal Network part, the class distributions P^O seem to come from the Faster R-CNN predictions, because the RPN does not provide them. So which is correct? Where do the object proposals come from, the RPN or the final Faster R-CNN boxes?
  2. In the loss part, you do not mention the bbox regression loss of Faster R-CNN. Do you mean that the boxes are only regressed at the anchor stage?

No module named 'lib.scene_parser.msdn'

I executed this command $ python main.py --config-file configs/baseline_res101.yaml and got the following error:

Traceback (most recent call last):
File "main.py", line 14, in
from lib.model import build_model
File "/graph-rcnn.pytorch-master/lib/model.py", line 9, in
from .scene_parser.parser import build_scene_parser
File "graph-rcnn.pytorch-master/lib/scene_parser/parser.py", line 16, in
from .msdn.msdn import MSDN
ModuleNotFoundError: No module named 'lib.scene_parser.msdn'

Please take a look @jwyang thanks again!

Some general clarification questions

@jwyang Thanks for the great repo!

I am a bit confused about which one is which among the pre-trained models; could you please specify which paper corresponds to which pre-trained model? It seems the first two are either IMP or MSDN, but I am not sure.

Also, what is the timeline for releasing the graph-rcnn model?

ValueError: need at least one array to stack

Traceback (most recent call last):
File "main.py", line 125, in
main()
File "main.py", line 120, in main
model = train(cfg, args)
File "main.py", line 68, in train
model = build_model(cfg, arguments, args.local_rank, args.distributed)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/model.py", line 304, in build_model
return SceneGraphGeneration(cfg, arguments, local_rank, distributed)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/model.py", line 31, in init
self.data_loader_train = build_data_loader(cfg, split="train", is_distributed=distributed)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/data/build.py", line 60, in build_data_loader
dataset = vg_hdf5(cfg, split=split, transforms=transforms, num_im=num_im)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/data/vg_hdf5.py", line 54, in init
filter_non_overlap=filter_non_overlap and split == "train",
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/data/vg_hdf5.py", line 287, in load_graphs
im_sizes = np.stack(im_sizes, 0)
File "/home/jungjunkim/anaconda3/envs/graph-rcnn/lib/python3.6/site-packages/numpy/core/shape_base.py", line 412, in stack
raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

I think the Visual Genome dataset path caused this problem.
Where do I set the image dataset path?

Question about performance on the reimplemented neural motif & frequency baseline

Hi, thank you very much for this great repository. After reading your paper, I am confused by the big gap between your reimplementation of Neural Motifs (and its frequency baseline) and the numbers reported in the original Neural Motifs paper. What do you think may cause such a big difference? I have read some other recent papers that also report slight improvements over Neural Motifs (recall@100 ~0.68 on PredCls); do you think those papers also include tricks, unrelated to their actual contributions, that contribute significantly to their high performance? Thank you very much.

Learning Separate Transformation Matrix for Different Nodes

❓ Questions & Help

I am working on an application in which all examples have the same graph structure.

Most graph networks (such as GCNConv) use a shared transformation matrix for all nodes. However, one could leverage the fixed graph structure of each example by either (a toy sketch of option 1 is given after this question):

  1. learning a different transformation matrix for each node,
    OR
  2. learning some kind of attention (i.e., a probability mask) over the neighbours of a node for aggregation (such as the attentional GCN in Graph R-CNN),
    OR
  3. an approximation of 1.

I am wondering whether you have come across models that leverage the relationships between nodes in a dataset where every example has the same graph?
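
For illustration, here is a toy sketch of option 1 above (a separate transformation matrix per node), assuming a fixed number of nodes shared by all examples; this is not code from the repo.

    import torch
    import torch.nn as nn

    class PerNodeLinear(nn.Module):
        # option 1: each of the num_nodes nodes gets its own transformation matrix
        def __init__(self, num_nodes, in_dim, out_dim):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_nodes, in_dim, out_dim) * 0.01)

        def forward(self, x):
            # x: (num_nodes, in_dim) -> (num_nodes, out_dim)
            return torch.einsum("nio,ni->no", self.weight, x)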

Multi-gpu inference

Feature request: enable multi-GPU inference in the same way as multi-GPU training; it is currently not supported.

Running on Custom Dataset

I am interested in this work; kindly provide the necessary instructions and scripts for running Graph R-CNN on custom datasets.

need at least one array to stack

I ran the code with the command python main.py --config-file configs/faster_rcnn_res101.yaml, using the Mini VG dataset. There is an error saying:
Traceback (most recent call last):
File "main.py", line 127, in
main()
File "main.py", line 122, in main
model = train(cfg, args)
File "main.py", line 68, in train
model = build_model(cfg, arguments, args.local_rank, args.distributed)
File "/home/chlorane/graphrcnn/lib/model.py", line 307, in build_model
return SceneGraphGeneration(cfg, arguments, local_rank, distributed)
File "/home/chlorane/graphrcnn/lib/model.py", line 31, in init
self.data_loader_train = build_data_loader(cfg, split="train", is_distributed=distributed)
File "/home/chlorane/graphrcnn/lib/data/build.py", line 60, in build_data_loader
dataset = vg_hdf5(cfg, split=split, transforms=transforms, num_im=num_im)
File "/home/chlorane/graphrcnn/lib/data/vg_hdf5.py", line 56, in init
filter_non_overlap=filter_non_overlap and split == "train",
File "/home/chlorane/graphrcnn/lib/data/vg_hdf5.py", line 287, in load_graphs
im_sizes = np.stack(im_sizes, 0) #im_sizes void
File "/home/chlorane/anaconda3/lib/python3.7/site-packages/numpy/core/shape_base.py", line 412, in stack
raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

I checked the code in vg_hdf5.py, in the function load_graphs(graphs_file, images_file, mode='train', num_im=-1, num_val_im=0, filter_empty_rels=True, filter_non_overlap=False):

    split_mask = data_split == split                      # split_mask all false
    split_mask &= roi_h5['img_to_first_box'][:] >= 0      # split_mask all false
    if filter_empty_rels:
        split_mask &= roi_h5['img_to_first_rel'][:] >= 0  # split_mask all false
    image_index = np.where(split_mask)[0]                 # np.where(split_mask) empty

    for i in range(len(image_index)):                     # image_index empty
    im_sizes = np.stack(im_sizes, 0)                      # im_sizes empty
How to solve this problem?

some questions...

How does the code read the scene graph checkpoint path? I cannot find the relevant line of code.
Also, what command line did you use? Is this right: python main.py --config-file configs/baseline_res101.yaml --inference --resume $CHECKPOINT

And how do I change the batch size?

yaml for `freq` model

Could you add a YAML file for the freq model?

ALGORITHM: "sg_baseline"
USE_FREQ_PRIOR: True

I am not sure whether the options above simply work for freq.

Question about stage-wise training

I'm a bit confused about how the scene graph generation training stage uses the pretrained Faster R-CNN model.
Could you please point me to the lines that indicate joint training? Or should I change some configurations?
Thanks in advance.

VGG Checkpoint

Thanks for this great repository.

In the ECCV paper, you used a VGG backbone. Could you please share its checkpoints? I understand you have provided a better ResNet backbone already, but I need to compare with the numbers you reported in the paper, and hence I need the original model's checkpoint.

Thanks very much for helping.

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

I ran python main.py --config-file configs/faster_rcnn_res101.yaml --inference --resume 1 and got:
2019-07-23 17:17:18,051 scene_graph_generation INFO: Namespace(config_file='configs/faster_rcnn_res101.yaml', distributed=False, inference=True, instance=-1, local_rank=0, resume=1, use_freq_prior=False, visualize=False)
2019-07-23 17:17:18,052 scene_graph_generation INFO: Loaded configuration file configs/faster_rcnn_res101.yaml
2019-07-23 17:17:18,052 scene_graph_generation INFO: Saving config into: logs/config.yml
Traceback (most recent call last):
File "main.py", line 125, in
main()
File "main.py", line 122, in main
test(cfg, args)
File "main.py", line 79, in test
model = build_model(cfg, arguments, args.local_rank, args.distributed)
File "/media/ailab/HDD/graph-rcnn.pytorch/lib/model.py", line 305, in build_model
return SceneGraphGeneration(cfg, arguments, local_rank, distributed)
File "/media/ailab/HDD/graph-rcnn.pytorch/lib/model.py", line 32, in init
self.data_loader_train = build_data_loader(cfg, split="train", is_distributed=distributed)
File "/media/ailab/HDD/graph-rcnn.pytorch/lib/data/build.py", line 60, in build_data_loader
dataset = vg_hdf5(cfg, split=split, transforms=transforms, num_im=num_im)
File "/media/ailab/HDD/graph-rcnn.pytorch/lib/data/vg_hdf5.py", line 54, in init
filter_non_overlap=filter_non_overlap and split == "train",
File "/media/ailab/HDD/graph-rcnn.pytorch/lib/data/vg_hdf5.py", line 241, in load_graphs
im_widths = im_h5["image_widths"][split_mask]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/home/ailab/anaconda2/envs/grcn/lib/python3.6/site-packages/h5py/_hl/dataset.py", line 533, in getitem
if args == (Ellipsis,) or args == tuple():
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

What can I do?

AttributeError: 'Baseline' object has no attribute 'feature_extractor'

Hi, jwyang.
I have a question. I would like to evaluate the

Scene Graph Generation (Joint training) - vanilla model,
so I downloaded the checkpoint that you uploaded
and put the file at graph-rcnn.pytorch-master/checkpoints/vg_benchmark_object/R-101-C4/sg_baseline_joint_2/BatchSize_2/Base_LR_0.005/checkpoint_0000002.pth,

then ran the code with the command "python main.py --config-file configs/sgg_res101_joint.yaml --inference --resume 2 --algorithm sg_baseline",

but I got the error below.
2019-08-26 08:48:30,918 scene_graph_generation INFO: Namespace(algorithm='sg_baseline', config_file='configs/sgg_res101_joint.yaml', distributed=False, inference=True, instance=-1, local_rank=0, resume=2, use_freq_prior=False, visualize=False)
2019-08-26 08:48:30,919 scene_graph_generation INFO: Loaded configuration file configs/sgg_res101_joint.yaml
2019-08-26 08:48:30,919 scene_graph_generation INFO: Saving config into: logs/config.yml
images_per_batch: 2, num_gpus: 1
images_per_batch: 1, num_gpus: 1
2019-08-26 08:48:44,960 scene_graph_generation.trainer INFO: Train data size: 56224
2019-08-26 08:48:44,960 scene_graph_generation.trainer INFO: Test data size: 26446
Traceback (most recent call last):
File "main.py", line 127, in
main()
File "main.py", line 124, in main
test(cfg, args)
File "main.py", line 79, in test
model = build_model(cfg, arguments, args.local_rank, args.distributed)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/model.py", line 363, in build_model
return SceneGraphGeneration(cfg, arguments, local_rank, distributed)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/model.py", line 50, in init
self.scene_parser = build_scene_parser(cfg); self.scene_parser.to(self.device)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/parser.py", line 144, in build_scene_parser
return SceneParser(cfg)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/parser.py", line 27, in init
self.rel_heads = build_roi_relation_head(cfg, self.backbone.out_channels)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/relation_heads/relation_heads.py", line 142, in build_roi_relation_head
return ROIRelationHead(cfg, in_channels)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/relation_heads/relation_heads.py", line 30, in init
self.rel_predictor = build_baseline_model(cfg, in_channels)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/relation_heads/baseline/baseline.py", line 31, in build_baseline_model
return Baseline(cfg, in_channels)
File "/media/jungjunkim/Dataset_SSD_500GB/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/relation_heads/baseline/baseline.py", line 17, in init
self.predictor = make_roi_relation_predictor(cfg, self.feature_extractor.out_channels)
File "/home/jungjunkim/anaconda3/envs/graph-rcnn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 535, in getattr
type(self).name, name))
AttributeError: 'Baseline' object has no attribute 'feature_extractor'

I think this error comes from a problem with the checkpoint path,
but I don't know how to set the path in this case.
Thank you.

Is pre-trained model available?

Hi,

Thanks for the great work and for releasing the code.

Is a pre-trained model available?
What is the ETA for the full release of the code?

Thanks much!
Hamid

ModuleNotFoundError: No module named 'maskrcnn_benchmark'

Traceback (most recent call last):
File "main.py", line 14, in
from lib.model import build_model
File "/root/graph-rcnn.pytorch-master/lib/model.py", line 9, in
from .scene_parser.parser import build_scene_parser
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/parser.py", line 9, in
from .rcnn.modeling.detector.generalized_rcnn import GeneralizedRCNN
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/detector/generalized_rcnn.py", line 13, in
from ..roi_heads.roi_heads import build_roi_heads
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/roi_heads/roi_heads.py", line 4, in
from .box_head.box_head import build_roi_box_head
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/roi_heads/box_head/box_head.py", line 6, in
from .roi_box_predictors import make_roi_box_predictor
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/roi_heads/box_head/roi_box_predictors.py", line 2, in
File "/root/graph-rcnn.pytorch-master/lib/scene_parser/rcnn/modeling/roi_heads/box_head/roi_box_predictors.py", line 2, [0/1964]le>
from maskrcnn_benchmark.modeling import registry
ModuleNotFoundError: No module named 'maskrcnn_benchmark'

I'm using Python 3.6, PyTorch 1.1, CUDA 10.1, NVIDIA driver 418.43.
Does anyone know why this bug comes up?
Any help is appreciated!

mAP

Would you mind telling me the mAP of your pretrained detector on Visual Genome? Mine is about 20%. Is that reasonable?

Dataset

I am wondering whether the dataset you used is the same as IMP's. I mean, you have not done any further cleaning on VG-IMP, but directly used the cleaned version borrowed from IMP. Furthermore, according to your experimental results in Table 2, although the reimplemented IMP and MSDN achieve better performance on SGGen and PhrCls, the PredCls performance drops by 8%. Could you help me figure this out?

Question about the attentional GCN module

Hey @jwyang, I highly appreciate your work in completing this repository.
I want to use the aGCN architecture you proposed for my project, but I couldn't find the exact module/function where it is defined. Could you maybe point me towards that module?

Thank you for your help,
Sandeep.

about node update (weight)

Hello, thank you for your work on graph-rcnn.
I'm trying to implement the aGCN and use it for my project, but I have a minor issue.

Before applying attention, you update the nodes as follows:
[attached image: node_update equation]

Did you use different weight parameters for each of W^{skip, sr, or, rs, ro}?
I'm new to GCNs, so I'm confused.
Did you define a separate Parameter for each and forward them independently?

What I mean by weight is the weight in GCN layers, as below:
https://github.com/tkipf/pygcn/blob/master/pygcn/layers.py#L18
self.weight = Parameter(torch.FloatTensor(in_features, out_features))

If you can give me a little guidance, I'll really appreciate it.
Also I'm really looking forward to the full release of this project!

Best,
Ahyun
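
For illustration (not the repository's actual aGCN code), here is a toy sketch of what separate weight matrices per edge type plus a skip transform, as described in the question above, could look like; all names and structure are assumptions.

    import torch
    import torch.nn as nn

    class TypedGCNLayer(nn.Module):
        # toy sketch: one weight matrix per edge type plus a skip transform,
        # mirroring the W^{skip, sr, rs, or, ro} notation in the question above
        def __init__(self, dim, edge_types=("sr", "rs", "or", "ro")):
            super().__init__()
            self.skip = nn.Linear(dim, dim, bias=False)
            self.edge = nn.ModuleDict({t: nn.Linear(dim, dim, bias=False) for t in edge_types})

        def forward(self, x, adj):
            # x: (N, dim) node features
            # adj: dict mapping edge type -> (N, N) normalized adjacency (or attention) matrix
            out = self.skip(x)
            for t, a in adj.items():
                out = out + a @ self.edge[t](x)
            return torch.relu(out)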

Out of memory when batch size = 2

Many thanks for your work on this repo!
I'm using a single 2080 Ti GPU with 11 GB of memory. Only when I set batch_size = 1 do the out-of-memory problems in object detection training and scene graph generation training not occur; otherwise I get: CUDA out of memory. Tried to allocate 294.00 MiB (GPU 0; 10.73 GiB total capacity; 9.28 GiB already allocated; 280.62 MiB free; 95.31 MiB cached)

Even with batch size = 1, an OOM error is suddenly thrown after 55000 training iterations.

I'm using the vg-bm dataset as README.md instructs.

Is this expected?

Question about relationship

Thank you for releasing this nice code!
I have a question.
I ran your code successfully and visualized the results,
but in the visualization image there are no relationship names,
so I modified your code to get triplets (object1, relationship, object2)

and I got the result.
(I set the relation score threshold to 0.03, compared to your 0, in sg_eval.py around line 45 (the exact line may differ):

rel_score_threshold = float(0.03)
sorted_inds = np.argsort(-scores)
sorted_inds = sorted_inds[scores[sorted_inds] > rel_score_threshold] #[:100]
)

I used the top_prediction object labels and got 923 top triplets;
part of the result is shown below in summary.

pant_wearing_jacket
street_on_bus
bus_on_street
sign_on_street
street_on_street
pole_on_street
street_on_sign
pant_wearing_woman
pant_on_street
man_wearing_pant
door_on_bus
street_on_bag
pole_on_sign
woman_wearing_bag
sign_on_bag
bag_wearing_man
jacket_on_street
windshield_on_street
jacket_wearing_woman
jacket_wearing_man
bus_on_bus
sign_on_street
window_on_bus
door_on_street
tire_on_bus
pant_on_bus
window_on_bus
woman_on_street
street_on_window
man_on_street
sign_on_bag
pole_on_sign
bus_on_window
bus_on_sidewalk
sidewalk_on_bus
street_on_sidewalk
tire_on_street
bus_on_sign
woman_wearing_woman
man_wearing_woman
bus_on_light
window_on_street
light_on_street
window_on_door
door_on_window
street_on_window
pant_wearing_person
sign_on_sign
tree_on_street
sign_on_windshield
light_on_bus
bag_on_bus
sign_on_light
bus_on_door
bag_wearing_jacket
person_wearing_bag
street_on_window
bus_on_man
roof_on_street
person_wearing_jacket
pole_on_bus
bus_on_windshield
bus_on_woman
bus_on_jacket
sign_on_window
windshield_on_light
window_on_sidewalk
pole_on_bus
sign_on_sidewalk
sidewalk_on_pant
sign_on_sign
door_on_sidewalk
street_on_building
sidewalk_on_windshield
bag_on_sidewalk
sign_on_window
man_on_bus
sidewalk_on_sign
window_on_door
pant_on_bus
pole_on_sidewalk
street_on_person
window_on_sign
pant_on_roof
window_on_bus
sign_on_bus
pant_wearing_pole
hair_on_street
tire_on_sidewalk
sign_on_sign
bus_on_bag
sign_on_bus
bus_on_woman
man_wearing_person
sidewalk_on_jacket
woman_wearing_person
sign_on_light
pole_on_bag
pole_on_window
bus_on_windshield
door_on_street
bus_on_tree
bag_on_window
tire_on_door
tire_on_bus
street_on_bus
sign_on_windshield
hair_wearing_man
sidewalk_on_sign
bus_on_window
hair_wearing_woman
pant_on_building
hair_wearing_pant
pant_wearing_door
pole_on_street
window_on_sign
pant_wearin

[attached image: detection_0]

I think this is somewhat different from your paper, and most of the relationships consist of 'on'.
Which variable should I use to efficiently get the relationships corresponding to top_predictions?

thank you Jianwei Yang.
