
fgvc-pim's Introduction

A Novel Plug-in Module for Fine-grained Visual Classification

paper url: https://arxiv.org/abs/2202.03822

We propose a novel plug-in module that can be integrated into many common backbones, including CNN-based and Transformer-based networks, to provide strongly discriminative regions. The plug-in module can output pixel-level feature maps and fuse filtered features to enhance fine-grained visual classification. Experimental results show that the proposed plug-in module outperforms state-of-the-art approaches and significantly improves the accuracy to 92.77% and 92.83% on CUB200-2011 and NABirds, respectively.

[Figure: framework overview]

1. Environment setting

// The old version has been moved to ./v0/

1.0. Package

1.1. Dataset

In this paper, we use two large bird datasets to evaluate performance: CUB200-2011 and NABirds.

1.2. Our pretrained model

1.3. OS

  • Windows 10
  • Ubuntu 20.04
  • macOS (CPU only)

2. Train

  • Single GPU Training
  • DataParallel (single machine multi-gpus)
  • DistributedDataParallel

(more information: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html)
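
For DistributedDataParallel, a minimal launch sketch is shown below. This is not the repo's exact code: the torchvision ResNet stands in for the PIM model built in ./models/builder.py, and the launch command is an assumption based on standard torchrun usage.

# Hedged DDP sketch; launch with:
#   torchrun --nproc_per_node=4 main.py --c ./configs/CUB200_SwinT.yaml
import os
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
torch.cuda.set_device(local_rank)

# stand-in for the PIM model from models/builder.py
model = torchvision.models.resnet18(num_classes=200)
model = DDP(model.cuda(), device_ids=[local_rank])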

2.1. data

Training and testing data should both be organized in the class-per-folder layout below (the test folder follows the same structure):

train/
├── class1/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
├── class2/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
└── ...
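
This layout matches torchvision's ImageFolder convention, so a minimal loading sketch looks as follows. Note the repo uses its own ImageDataset class; the path, image size, and batch size here are illustrative assumptions.

import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

transform = T.Compose([
    T.Resize((384, 384)),   # match data_size in the YAML config
    T.ToTensor(),
])
train_set = ImageFolder("./data/train/", transform=transform)
train_loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=8)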

2.2. configuration

You can directly modify the YAML files in ./configs/.

2.3. run

python main.py --c ./configs/CUB200_SwinT.yaml

The model will be saved to ./records/{project_name}/{exp_name}/backup/.

2.4. about custom models

Model building is handled in ./models/builder.py; see how_to_build_pim_model.ipynb for more detail, and the sketch below for the general pattern.
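
The core pattern is wrapping a timm backbone with torchvision's create_feature_extractor so the plug-in module receives intermediate feature maps. A hedged sketch with a ResNet-50 backbone follows; the node names are illustrative for ResNet-50 and differ per backbone (the actual choices live in builder.py).

import timm
import torch
from torchvision.models.feature_extraction import create_feature_extractor

# pretrained=False avoids a weight download in this sketch
backbone = timm.create_model("resnet50", pretrained=False)

# Illustrative node names for ResNet-50; other backbones use different ones.
return_nodes = {
    "layer1": "layer1",
    "layer2": "layer2",
    "layer3": "layer3",
    "layer4": "layer4",
}
extractor = create_feature_extractor(backbone, return_nodes=return_nodes)

outs = extractor(torch.randn(1, 3, 384, 384))
print({name: feat.shape for name, feat in outs.items()})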

2.5. multi-gpus

Multi-GPU (DataParallel) training is toggled at main.py line 66:

model = torch.nn.DataParallel(model, device_ids=None)
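
DataParallel replicates the model across all visible GPUs on one machine and splits each batch among them; for multi-node or more scalable training, use the DistributedDataParallel setup sketched in Section 2 above.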

2.6. automatic mixed precision (amp)

With use_amp: True, training takes about 3 hours; with use_amp: False, about 5 hours.
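
A hedged sketch of the mixed-precision step that use_amp controls is shown below; model, optimizer, criterion, and train_loader are assumed to already exist, and this is not the repo's exact training loop.

import torch

use_amp = True
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for datas, labels in train_loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):
        outs = model(datas.cuda())        # forward in fp16 where safe
        loss = criterion(outs, labels.cuda())
    scaler.scale(loss).backward()         # scaled backward to avoid underflow
    scaler.step(optimizer)
    scaler.update()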

3. Evaluation

If you want to evaluate our pretrained model (or your own), please provide configs/eval.yaml (a custom YAML file is also fine).

3.1. check the yaml configuration

Set the following keys in the configuration file:

Key         Value                  Description
train_root  ~                      setting the value to ~ (null) disables training mode
val_root    ../data/eval/          path to validation samples
pretrained  ./pretrained/best.pt   path to the pretrained model
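
For reference, a minimal eval.yaml excerpt matching these keys might look as follows; the remaining keys are assumed to stay the same as in the training config.

train_root: ~                      # null disables training mode
val_root: ../data/eval/
pretrained: ./pretrained/best.pt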

../data/eval/ folder structure:

eval/
├── class1/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
├── class2/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
└── ...

3.2. run

python main.py --c ./configs/eval.yaml

Results are shown in the terminal and saved to ./records/{project_name}/{exp_name}/eval_results.txt.

4. HeatMap

python heat.py --c ./configs/CUB200_SwinT.yaml --img ./vis/001.jpg --save_img ./vis/001/

[Figures: heatmap visualizations]

5. Infer

If you want to run inference on your own images and get the confusion matrix, please provide configs/eval.yaml (a custom YAML file is also fine).

5.1. check the yaml configuration

Set the following keys in the configuration file:

Key         Value                  Description
train_root  ~                      setting the value to ~ (null) disables training mode
val_root    ../data/eval/          path to validation samples
pretrained  ./pretrained/best.pt   path to the pretrained model

../data/eval/ folder structure:

eval/
├── class1/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
├── class2/
│   ├── img001.jpg
│   ├── img002.jpg
│   └── ...
└── ...

5.2. run

python infer.py --c ./configs/eval.yaml

Results are shown in the terminal and saved to ./records/{project_name}/{exp_name}/infer_results.txt.


Acknowledgment

  • Thanks to timm for the PyTorch implementation.

  • This work was financially supported by National Taiwan Normal University (NTNU) within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan, and sponsored by the Ministry of Science and Technology, Taiwan, R.O.C. under Grants no. MOST 110-2221-E-003-026, 110-2634-F-003-007, and 110-2634-F-003-006. In addition, we thank the National Center for High-performance Computing (NCHC) for providing computational and storage resources.

fgvc-pim's People

Contributors

chou141253, renluxi, yuting21015


fgvc-pim's Issues

some question about model code

Hello, I would like to use your code to train and test on my own dataset, but I ran into some problems:
You define your main model in PluginMoodel; at line 330 of pim_module.py there is the following code:

# get hidden features size
rand_in = torch.randn(1, 3, img_size, img_size)
outs = self.backbone(rand_in)

This outs is a tensor, but it is later indexed like a dictionary, which raises the following error:
IndexError: tensors used as indices must be long, int, byte or bool tensors
The FPN also treats outs as a dictionary and raises the same error.

How to train the model on my own dataset?

Thanks for your code, it is really nice work!
However, I ran into several problems when adapting the code to my own data.
I have already: 1) changed the class number in config.py and set the args on the command line; 2) changed the input image size following earlier issues on this GitHub page.

However, after changing these two aspects, the results are still misleading and confusing.
So may I ask: when using the model on one's own dataset, what else needs to be changed?

NAbirds dataset

Hello, can you provide the NABirds dataset? It can't be downloaded from the official website. Thank you very much.

The result when setting L_s as non-zero?

I have noticed that in your paper L_s is set to 0. This is consistent with the code:

def _select_loss(self, selected_logits, labels):
    loss1 = 0

But I am curious about the experimental results with a non-zero L_s. Does the accuracy decrease noticeably?

How to run HeatMap with your best pretrained NABirds model?

Thank you so much for the beautiful code.
I'm trying to use your pretrained model best.pt on the NABirds dataset.

First, I set the PATH to the pretrained model in NABirds_SwinT.yaml
then I run:

python heat.py --c ./configs/NABirds_SwinT.yaml --img ./vis/001.jpg --save_img ./vis/001/

But I get errors:


Building...
Traceback (most recent call last):
File "C:\Users\xxxxxx\Desktopxxxxxx\heat.py", line 100, in
model.load_state_dict(checkpoint['model'])
File "C:\Users\xxxxxx\anaconda3\envs\xxxxxxx\lib\site-packages\torch\nn\modules\module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PluginMoodel:
Unexpected key(s) in state_dict: "combiner.conv_qk1.weight", "combiner.conv_qk1.bias".
size mismatch for combiner.adj1: copying a param with shape torch.Size([85, 85]) from checkpoint, the shape in current model is torch.Size([15, 15]).
size mismatch for combiner.param_pool0.weight: copying a param with shape torch.Size([85, 2720]) from checkpoint, the shape in current model is torch.Size([15, 480]).
size mismatch for combiner.param_pool0.bias: copying a param with shape torch.Size([85]) from checkpoint, the shape in current model is torch.Size([15]).
size mismatch for combiner.param_pool1.weight: copying a param with shape torch.Size([1, 85]) from checkpoint, the shape in current model is torch.Size([1, 15]).


It seems that your best.pt model is not using the default SwinT configuration; the num_selects values do not match.
There are also unexpected keys in your best.pt model: "combiner.conv_qk1.weight", "combiner.conv_qk1.bias".

I wish to know what modification I should make to load your pretrained best.pt.

Questions regarding inference

Hi, I am running a test of your repo with the Stanford Dogs dataset, which has 120 species. The model trained really well, but I am a little confused by your inference pipeline. I just want to run inference on a single image, so I am referring to your eval.py and plot_heat.py at the moment.

Your eval.py seems to call SwinVit12, but plot_heat.py seems to call SwinVit12_demo. Are there differences between the two?

Just tried running eval.py and I am getting:

RuntimeError: Error(s) in loading state_dict for SwinVit12:
        size mismatch for gcn.adj1: copying a param with shape torch.Size([85, 85]) from checkpoint, the shape in current model is torch.Size([15, 15]).
        size mismatch for gcn.pool1.weight: copying a param with shape torch.Size([85, 2720]) from checkpoint, the shape in current model is torch.Size([15, 480]).
        size mismatch for gcn.pool1.bias: copying a param with shape torch.Size([85]) from checkpoint, the shape in current model is torch.Size([15]).
        size mismatch for gcn.pool4.weight: copying a param with shape torch.Size([1, 85]) from checkpoint, the shape in current model is torch.Size([1, 15]).

Seems like something is not configured properly on my end.

How to config use_layers for eff or resnet

Hi guys, great work!
I want to try eff_b7 or resnet50 as the backbone, but the code seems to default to swin. If I simply change the backbone to eff_b7 or resnet, the assert inside the model definition raises an error:

        assert len(use_layers) == self.num_layers
        assert len(use_selections) == len(use_layers)

I wonder what the correct use_layers setting is for such backbones?

RuntimeError: mat1 and mat2 shapes cannot be multiplied (6144x1456 and 2720x85)

When I select efficientnet for training I get the following error; only swin-transformer does not report it.
Can you help me?

Start Training 1 Epoch
Traceback (most recent call last):
File "D:/hxy/FGVC-PIM-master/main.py", line 301, in
main(args, tlogger)
File "D:/hxy/FGVC-PIM-master/main.py", line 253, in main
train(args, epoch, model, scaler, amp_context, optimizer, schedule, train_loader)
File "D:/hxy/FGVC-PIM-master/main.py", line 140, in train
outs = model(datas)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch_utils.py", line 457, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\parallel\parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "D:\hxy\FGVC-PIM-master\models\pim_module\pim_module.py", line 414, in forward
comb_outs = self.combiner(selects)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "D:\hxy\FGVC-PIM-master\models\pim_module\pim_module.py", line 81, in forward
hs = self.param_pool0(hs)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\mj\anaconda3\envs\pytorch\lib\site-packages\torch\nn\modules\linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (6144x1456 and 2720x85)

How to train on 224*224 images

If I want to train on 224*224 images, what parameters should I change in config.py when choosing vitb16 or SWIN_W7_224?

Some questions about the eval metrics of this paper.

After reading your paper and code, some questions confused me. I would appreciate it if someone could explain.
First, the paper states that Swin-T is used, but in your model definition code it actually seems to be Swin-L.
Second, your evaluation code calculates top-5 accuracy in the function _average_top_k_result (eval.py), using all module outputs (select, drop, FPN layers, combiner, and the original output) and concatenating them to get the final metric; is this reasonable?
Third, the highest-5 accuracies in your code do not correspond to any single layer, as printed in the attached screenshot. So what is the meaning of highest 1-5, and how are they obtained?

demo

Hello, can you share a demo script to run inference on an image? Thanks.

How to use this code when infer?

I examined the code carefully. The loss calculation is written inside the model, so after initialization the model must be given the labels. There is no code path designed for inference; after end-to-end training, a series of post-processing steps is used (concatenating features, fusing features, etc.), and then each output is classified separately. I want to know: if I don't have the ground truth, how do I choose the best result? Is this code written only to climb the leaderboard? Where is the logic for a real application?

ViT input size: transforming 224 to another size

Dear author, your code says "ViT model input can be transformed from 224 to another size; we use linear", but I do not know how to use this. I tried using 384*384 directly as my input size, but I got a tensor mismatch error, so I had to resize my input data; the same issue appears when running on my test data. Could you tell me how to transform 224 to another size, or do I just need to add a linear layer before feeding in my data?

Thanks

create_feature_extractor problem with VIT

When I use create_feature_extractor to get the nodes of ViT, I get the following errors:


Building Model....
Traceback (most recent call last):
File "/home/user5/FGVC-PIM/main_PAC.py", line 350, in
main(args, tlogger)
File "/home/user5/FGVC-PIM/main_PAC.py", line 260, in main
train_loader, val_loader, model, optimizer, schedule, scaler, amp_context, start_epoch = set_environment(args, tlogger)
File "/home/user5/FGVC-PIM/main_PAC.py", line 82, in set_environment
use_combiner = args.use_combiner,
File "/home/user5/FGVC-PIM/models/builder.py", line 222, in build_vit16
comb_proj_size = comb_proj_size)
File "/home/user5/FGVC-PIM/models/pim_module/pim_module_vit.py", line 439, in init
self.backbone = create_feature_extractor(backbone, return_nodes = return_nodes)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/models/feature_extraction.py", line 441, in create_feature_extractor
graph = tracer.trace(model)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 571, in trace
self.create_node('output', 'output', (self.create_arg(fn(*args)),), {},
File "/home/user5/FGVC-PIM/timm/models/vision_transformer.py", line 339, in forward
x = self.forward_features(x)
File "/home/user5/FGVC-PIM/timm/models/vision_transformer.py", line 330, in forward_features
x = self.blocks(x)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 560, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/models/feature_extraction.py", line 79, in call_module
out = forward(*args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 556, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/container.py", line 139, in forward
input = module(input)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 560, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/models/feature_extraction.py", line 79, in call_module
out = forward(*args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 556, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user5/FGVC-PIM/timm/models/vision_transformer.py", line 206, in forward
x = x + self.drop_path(self.attn(self.norm1(x)))
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 560, in module_call_wrapper
return self.call_module(mod, forward, args, kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torchvision/models/feature_extraction.py", line 79, in call_module
out = forward(*args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/symbolic_trace.py", line 556, in forward
return _orig_module_call(mod, *args, **kwargs)
File "/home/user5/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/user5/FGVC-PIM/timm/models/vision_transformer.py", line 182, in forward
attn = (q @ k.transpose(-2, -1)) * self.scale
TypeError: unsupported operand type(s) for @: 'Proxy' and 'Proxy'

How to solve this problem?

Hi, your code is very nice, but I got the following error when using Swin-T to run the code. How can I use pretrained weights I downloaded myself instead of downloading them at run time?
Traceback (most recent call last):
File "train.py", line 396, in
train_loader, test_loader, model, optimizer, schedule = set_environment(args)
File "train.py", line 94, in set_environment
model = SwinVit12(
File "/home/pengtl/jackhu/FGVC-PIM-master/models/SwinVit12.py", line 202, in init
self.extractor = timm.create_model('swin_large_patch4_window12_384_in22k', pretrained=True)
File "/home/pengtl/jackhu/FGVC-PIM-master/timm/models/factory.py", line 81, in create_model
model = create_fn(pretrained=pretrained, **kwargs)
File "/home/pengtl/jackhu/FGVC-PIM-master/timm/models/swin_transformer.py", line 654, in swin_large_patch4_window12_384_in22k
model = _create_swin_transformer('swin_large_patch4_window12_384_in22k', pretrained=pretrained, **model_kwargs)
File "/home/pengtl/jackhu/FGVC-PIM-master/timm/models/swin_transformer.py", line 562, in _create_swin_transformer
model = build_model_with_cfg(
File "/home/pengtl/jackhu/FGVC-PIM-master/timm/models/helpers.py", line 457, in build_model_with_cfg
load_pretrained(
File "/home/pengtl/jackhu/FGVC-PIM-master/timm/models/helpers.py", line 184, in load_pretrained
state_dict = load_state_dict_from_url(pretrained_url, progress=progress, map_location='cpu')
File "/home/pengtl/anaconda3/lib/python3.8/site-packages/torch/hub.py", line 528, in load_state_dict_from_url
return torch.load(cached_file, map_location=map_location)
File "/home/pengtl/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/pengtl/anaconda3/lib/python3.8/site-packages/torch/serialization.py", line 762, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.

How to solve the maximum recursion depth error?

This is great work, but I ran it on my own dataset and encountered the following errors.

Start Training 3 Epoch..0%..10%..20%
Start Evaluating 1 Epoch
Start Training 2 Epoch
Start Training 3 Epoch
Traceback (most recent call last):
File "main.py", line 297, in
main(args, tlogger)
File "main.py", line 249, in main
train(args, epoch, model, scaler, amp_context, optimizer, schedule, train_loader)
File "main.py", line 136, in train
outs = model(datas)
File "/DATA/sgwei/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/DATA/sgwei/code/Fine_grained_PIM/models/pim_module/pim_module.py", line 404, in forward
x = self.forward_backbone(x)
File "/DATA/sgwei/code/Fine_grained_PIM/models/pim_module/pim_module.py", line 383, in forward_backbone
return self.backbone(x)
File "/DATA/sgwei/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/fx/graph_module.py", line 616, in wrapped_call
raise e.with_traceback(None)
RecursionError: maximum recursion depth exceeded while calling a Python object

and this is my config:

batch_size: 16
c: "./configs/classification_vit.yaml"
data_size: 448
device: "cuda:0"
eval_freq: 10
exp_name: "T3000"
fpn_size: 512
lambda_b: 0.5
lambda_c: 1
lambda_n: 5
lambda_s: 0
log_freq: 100
max_epochs: 20
max_lr: 0.0005
model_name: "vit"
num_classes: 5
num_selects:
  layer1: 512
  layer2: 256
  layer3: 128
  layer4: 64
num_workers: 8
optimizer: "SGD"
pretrained:
  desc: null
  value: null
project_name: "pim_classificationpip"
save_dir: "./records/pim_classificationpip/T3000/"
train_root: "/DATA/sgwei/Datasets/DataBase_v2/train/"
update_freq: 2
use_amp: true
use_combiner: true
use_fpn: true
use_selection: true
use_wandb: true
val_root: "/DATA/sgwei/Datasets/MultiLesionClassify_DataBase_v2/val/"
wandb_entity: "sgwei"
warmup_batchs: 800
wdecay: 0.0005

What are the parameters of eval?

We have a course assignment to reproduce your code, but the parameters for eval are not documented. I set train_root to "~" according to the md document, but this caused an error in the set_environment function in main.py.

Swin-T and Resolution

Hi, thanks for your excellent work. I have a question: I can only find the Swin-T model pretrained on ImageNet-1k at resolution 224. Can you provide a link to download the Swin-T pretrained model used in the paper?

Multiple GPUs Available?

It seems the code only supports single-GPU training.
Is it possible to train on multiple GPUs?
Thanks.

Regarding use of test set for model selection

In the code, you report the best accuracy on the test set over all epochs. But generally, the validation set should be used for model selection and not the test set. I am aware that the validation set is not available for CUB-200-2011. Is this the standard practice for this dataset? There are other works that use a subset of the training set for validation. In that case, the comparison may not be fair.

How to run the code?

I want to run train.py on my own dataset, but I get an error:
RuntimeError: "max_cpu" not implemented for 'Half'

How to train on CUB

Thanks for your great work! I have browsed your code and found that the data is read in through the ImageDataset class, but it does not seem to fit the original format of the CUB dataset. Did you change the format of the original CUB dataset for your experiments? If so, could you please tell me how?
