
vt-unet's Introduction

VT-UNet

This repo contains the supporting PyTorch code and configuration files to reproduce the 3D medical image segmentation results of VT-UNet.

VT-UNet Architecture

Our previous code for "A Volumetric Transformer for Accurate 3D Tumor Segmentation" can be found inside the version 1 folder.

VT-UNet: A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation

Parts of the code are borrowed from nn-UNet.

System requirements

This software was originally designed for and run on a system running Ubuntu.

Dataset Preparation

  • Create a folder named DATASET under VTUNet
  • Download the MSD BraTS dataset (http://medicaldecathlon.com/) and put it under DATASET/vtunet_raw/vtunet_raw_data
  • Rename the folder to Task03_tumor
  • Move the dataset.json file into Task03_tumor
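
After these steps, the raw data layout should look roughly like this (imagesTr, labelsTr and imagesTs follow the standard MSD format; exact contents depend on your download):

    DATASET/
    └── vtunet_raw/
        └── vtunet_raw_data/
            └── Task03_tumor/
                ├── dataset.json
                ├── imagesTr/
                ├── labelsTr/
                └── imagesTs/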

Pre-trained weights

Create Environment variables

vi ~/.bashrc

  • export vtunet_raw_data_base="/home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data"
  • export vtunet_preprocessed="/home/VTUNet/DATASET/vtunet_preprocessed"
  • export RESULTS_FOLDER_VTUNET="/home/VTUNet/DATASET/vtunet_trained_models"

source ~/.bashrc
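
As a quick sanity check (a minimal sketch, not part of the repo), confirm the variables are visible to Python before preprocessing:

    import os

    # These names must match the exports above; a KeyError here means the
    # shell environment was not sourced correctly.
    for var in ("vtunet_raw_data_base", "vtunet_preprocessed", "RESULTS_FOLDER_VTUNET"):
        print(var, "=", os.environ[var])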

Environment setup

Create a virtual environment

  • virtualenv -p /usr/bin/python3.8 venv
  • source venv/bin/activate

Install torch
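
The exact command depends on your CUDA version; as an illustrative example (check pytorch.org for the command matching your setup):

  • pip install torch torchvision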

Install other dependencies

  • pip install -r requirements.txt

Preprocess Data

cd VTUNet

pip install -e .

  • vtunet_convert_decathlon_task -i /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/Task03_tumor
  • vtunet_plan_and_preprocess -t 3

The first command converts the MSD-format folder into the task layout the pipeline expects (following nn-UNet conventions, Task03_tumor becomes Task003_tumor); the second runs experiment planning and preprocessing for task ID 3.

Train Model

cd vtunet

  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 0 &> small.out &
  • CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor_base 3 0 &> base.out &

Here 3d_fullres is the configuration, the trainer class selects the small or base model, 3 is the task ID, and 0 is the fold index (nn-UNet conventions); logs are written to small.out and base.out.
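
Since training runs in the background via nohup, you can follow progress by tailing the log file, e.g.:

  • tail -f small.out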

Test Model

cd /home/VTUNet/DATASET/vtunet_raw/vtunet_raw_data/vtunet_raw_data/Task003_tumor/

  • CUDA_VISIBLE_DEVICES=0 vtunet_predict -i imagesTs -o inferTs/vtunet_tumor -m 3d_fullres -t 3 -f 0 -chk model_best -tr vtunetTrainerV2_vtunet_tumor
  • python vtunet/inference_tumor.py vtunet_tumor

Trained model Weights

  • VT-UNet-S (fold 0 only)
  • VT-UNet-B (To be updated)

Acknowledgements

This repository makes liberal use of code from open_brats2020, Swin Transformer, Video Swin Transformer, Swin-Unet, nnUNet, and nnFormer.

References

Citing VT-UNet

    @inproceedings{peiris2022robust,
      title={A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation},
      author={Peiris, Himashi and Hayat, Munawar and Chen, Zhaolin and Egan, Gary and Harandi, Mehrtash},
      booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
      pages={162--172},
      year={2022},
      organization={Springer}
    }

vt-unet's People

Contributors

himashi92


vt-unet's Issues

CUDA out of memory

Hi, I'm now getting a CUDA out-of-memory error while training with the small configuration.
I saw other similar issues but I'm not sure how to resolve it in my situation. I'm using an AWS EC2 p3.2xlarge instance (61 GB RAM, 16 GB GPU memory). Is the "tiny" configuration still available? If so, how can I use it? Thanks.

  File "/home/VTUNet/vtunet/network_architecture/vtunet_tumor.py", line 359, in forward
    x, x2, v, k, q = self.forward_part1(x, mask_matrix, prev_v, prev_k, prev_q, is_decoder)
  File "/home/VTUNet/vtunet/network_architecture/vtunet_tumor.py", line 307, in forward_part1
    attn_windows, cross_attn_windows, v, k, q = self.attn(x_windows, mask=attn_mask, prev_v=prev_v, prev_k=prev_k,
  File "/opt/conda/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/VTUNet/vtunet/network_architecture/vtunet_tumor.py", line 197, in forward
    x2 = (attn2 @ prev_v).transpose(1, 2).reshape(B_, N, C)
RuntimeError: CUDA out of memory. Tried to allocate 216.00 MiB (GPU 0; 15.77 GiB total capacity; 13.91 GiB already allocated; 166.12 MiB free; 14.02 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Exception in thread Thread-5:
...
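
As the error message itself suggests, one mitigation to try (an illustrative sketch, not a confirmed fix for this repo) is setting the allocator option before CUDA is initialised:

    import os

    # Must be set before the first CUDA allocation; 128 MB is an illustrative value.
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

    import torch  # imported after setting the env var (it is read when CUDA is initialised)

    # Reducing the patch/batch size in the plans, or using mixed precision
    # (torch.cuda.amp.autocast) during training, also lowers peak memory.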

Training & Testing On MSD Data

First of all, thanks @himashi92 for this amazing work.
Since we do not have access to the BraTS data and want to run your model on the MSD dataset given here, it would be very helpful if you could help me with that.
What will be the tree structure of the code before running, and what changes in the code need to be made?
It would be even more helpful if you could add your code for running it on MSD data to this repo.

task 4 issue

For the hippocampus task, I am getting this error:

x = x.view(B, D * 8, H, W, C)
RuntimeError: shape '[9, 32, 4, 4, 1152]' is invalid for input of size 414720

I set the number of channels to 1 and preprocessed the data. Any suggestions on how to resolve this?

Version2

Hi,

Nice work. Previously you said in the README that this new version would perform better than version 1. Could you briefly explain whether and what you changed in the network structure to make it better?

Thank you.

size

Hello, I noticed that your code crops the data to 128×128×128 during training, but not during validation and testing. Can the volumes also be cropped to 128×128×128 during validation and testing?
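
For reference, a minimal sketch (not the repo's code) of applying the same 128×128×128 center crop to a (C, D, H, W) volume at validation/test time:

    import numpy as np

    def center_crop_3d(vol, size=(128, 128, 128)):
        """Center-crop a (C, D, H, W) array to the given spatial size.

        Assumes the volume is at least the crop size in every dimension.
        """
        _, d, h, w = vol.shape
        sd, sh, sw = size
        zs, ys, xs = (d - sd) // 2, (h - sh) // 2, (w - sw) // 2
        return vol[:, zs:zs + sd, ys:ys + sh, xs:xs + sw]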

RuntimeError: CUDA out of memory

Traceback (most recent call last):
  File "train.py", line 314, in <module>
    main(arguments)
  File "train.py", line 166, in main
    segs_S1 = model_1(inputs_S1)
  File "/home/yusongli/.bin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vision_transformer.py", line 49, in forward
    return self.swin_unet(x)
  File "/home/yusongli/.bin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 1118, in forward
    x, x_downsample, v_values_1, k_values_1, q_values_1, v_values_2, k_values_2, q_values_2 = self.forward_features(
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 960, in forward_features
    x, v1, k1, q1, v2, k2, q2 = layer(x, i)
  File "/home/yusongli/.bin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 726, in forward
    x, v1, k1, q1 = blk(x, attn_mask, None, None, None)
  File "/home/yusongli/.bin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 392, in forward
    x, x2, v, k, q = self.forward_part1(x, mask_matrix, prev_v, prev_k, prev_q, is_decoder)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 340, in forward_part1
    attn_windows, cross_attn_windows, v, k, q = self.attn(x_windows, mask=attn_mask, prev_v=prev_v, prev_k=prev_k,
  File "/home/yusongli/.bin/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/yusongli/_project/shidaoai/task/01_seg/VT-Unet/vtunet/vt_unet.py", line 191, in forward
    attn = q @ k.transpose(-2, -1)
RuntimeError: CUDA out of memory. Tried to allocate 9.02 GiB (GPU 0; 23.70 GiB total capacity; 11.37 GiB already allocated; 8.42 GiB free; 13.10 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Notice: I modified the dataloader to fit my own dataset. I've searched for lots of solutions but failed to solve this. Very sad :-(
Could you please help me? Maybe the model is too large to train? I don't know. Thanks!

How to predict on BraTS validation cases?

First of all, thank you for your excellent work. I want to ask how to use this code to predict on a case. The BraTS validation dataset only has four files per case and no segmentation file; I want to use this code to predict on every validation case. Looking forward to your reply.

Error

ImportError: cannot import name 'DataChannelSelectionTransform' from 'batchgenerators.transforms' (batchgenerators==0.21)

memory

Hello, the device I have access to now has only 12 GB of GPU memory, and your code keeps reporting out-of-memory errors. Can it be made to run by modifying parameters?

Data processing questions about Brats21

Thank you very much for your excellent work. I noticed that in your brats.py code, the fusion of the different labels into ET, TC, and WT also occurs in many data-processing pipelines, such as MONAI. But I see that your paper still classifies and evaluates according to the competition labels, right?
Looking forward to your reply.
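
For context, the standard BraTS region fusion (a sketch of the usual convention, not necessarily identical to the repo's brats.py) maps the competition labels (1 = NCR/NET, 2 = edema, 4 = enhancing tumor) to three overlapping regions:

    import numpy as np

    def brats_labels_to_regions(seg):
        """Fuse BraTS labels into the three overlapping evaluation regions."""
        wt = np.isin(seg, [1, 2, 4])  # whole tumor: all tumor labels
        tc = np.isin(seg, [1, 4])     # tumor core: NCR/NET + enhancing
        et = (seg == 4)               # enhancing tumor only
        return wt, tc, et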

Test error

Hello, when I run the test command py --cfg configs/vt_unet_base.yaml --num_classes 3, I get KeyError: column not found 'sens', and the weight file also seems to be a problem. May I ask how you resolved this?

Inference failed due to missing postprocessing.json file

Dear author,
I find your paper very interesting and I'm new to the medical imaging field. I am trying to reproduce your model results for version 1 and version 2.

In version 1, I couldn't find the HD95 code, so I copied it from here, but it gives the wrong HD95 during evaluation. Can you please provide it for version 1?

In version 2, when I run the inference code it gives the error shown in the figure. To resolve this issue I ran consolidate_postprocessing_simple.py to compute the postprocessing.json file, but it says fold_0, fold_1, fold_2, etc. are missing. I trained the model for fold=0 as described in the instructions. Could you please look into this?

[error screenshot omitted]

Question about init_weights function in SwinTransformerSys3D class

Hi @himashi92,
First of all, thank you very much for your great project.
I have a question about the init_weights function in the SwinTransformerSys3D class:

def init_weights(self, pretrained=None):

I am reading the code of this class, and the init_weights function does not seem to be used anywhere when the model is created for training or testing in the train.py or test.py files.

I hope you will answer my question. Thank you.
Tan Thin Nguyen.

Error when run train.py

Hi,
I am trying to regenerate the results and got the following error. Any help with this?
Thanks

AssertionError Traceback (most recent call last)
~/Desktop/projects/VT-UNet-main/train.py in <module>
300 arguments = parser.parse_args()
301 os.environ['CUDA_VISIBLE_DEVICES'] = arguments.devices
--> 302 main(arguments)

~/Desktop/projects/VT-UNet-main/train.py in main(args)
109 optimizer = torch.optim.Adam(params, lr=args.lr, weight_decay=args.weight_decay)
110
--> 111 full_train_dataset, l_val_dataset, bench_dataset = get_datasets(args.seed, fold_number=args.fold)
112 train_loader = torch.utils.data.DataLoader(full_train_dataset, batch_size=args.batch_size, shuffle=True,
113 num_workers=args.workers, pin_memory=True, drop_last=True)

~/Desktop/projects/VT-UNet-main/dataset/brats.py in get_datasets(seed, on, fold_number, normalisation)
90 base_folder = pathlib.Path(get_brats_folder(on)).resolve()
91 print(base_folder)
---> 92 assert base_folder.exists()
93 patients_dir = sorted([x for x in base_folder.iterdir() if x.is_dir()])
94

AssertionError:

dataset

Hello, author. From your paper, I found that you divided the 1251 cases into training, validation and test sets for your experiments, rather than taking part in the 2021 challenge to evaluate on its 251 validation cases. Is this a convincing division?

Output channel settings

Hello, I want to train the VT-UNet model on my RibFrac dataset (a rib fracture segmentation dataset). There are five types of rib fracture, so the output channel count should be set to 6 (five classes plus background), right?

Data preprocess

Hi @himashi92

Thanks for this awesome work and repo. I am trying to replicate this on the Liver data in the Decathlon. When I run the train command for the base training, CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor_base 3 0 &> base.out &

I get the following error (KeyError: 'BRATS_001'):

stage: 0
{'batch_size': 2, 'num_pool_per_axis': [5, 5, 5], 'patch_size': array([128, 128, 128]), 'median_patient_size_in_voxels': array([195, 207, 207]), 'current_spacing': array([2.473119 , 1.89831205, 1.89831205]), 'original_spacing': array([1. , 0.76757812, 0.76757812]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

stage: 1
{'batch_size': 2, 'num_pool_per_axis': [5, 5, 5], 'patch_size': array([128, 128, 128]), 'median_patient_size_in_voxels': array([482, 512, 512]), 'current_spacing': array([1. , 0.76757812, 0.76757812]), 'original_spacing': array([1. , 0.76757812, 0.76757812]), 'do_dummy_2D_data_aug': False, 'pool_op_kernel_sizes': [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], 'conv_kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]}

I am using stage 1 from these plans
I am using batch dice + CE loss

I am using data from this folder: /home/VT-UNet/VTUNet/DATASET/vtunet_preprocessed/Task003_Liver/vtunetData_plans_v2.1
###############################################
loading dataset
loading all case properties
2022-08-16 04:36:37.549490: Creating new 5-fold cross-validation split...
2022-08-16 04:36:37.550782: Desired fold for training: 0
2022-08-16 04:36:37.550879: This split has 387 training and 73 validation cases.
Traceback (most recent call last):
File "/home/VT-UNet/transegvenv/bin/vtunet_train", line 33, in
sys.exit(load_entry_point('vtunet', 'console_scripts', 'vtunet_train')())
File "/home/VT-UNet/VTUNet/vtunet/run/run_training.py", line 134, in main
trainer.initialize(not validation_only)
File "/home/VT-UNet/VTUNet/vtunet/training/network_training/vtunetTrainerV2_vtunet_liver_base.py", line 90, in initialize
self.dl_tr, self.dl_val = self.get_basic_generators()
File "/home/VT-UNet/VTUNet/vtunet/training/network_training/vtunetTrainer.py", line 401, in get_basic_generators
self.do_split()
File "/home/VT-UNet/VTUNet/vtunet/training/network_training/vtunetTrainerV2_vtunet_liver_base.py", line 410, in do_split
self.dataset_tr[i] = self.dataset[i]
KeyError: 'BRATS_001'

Any idea how to fix this? Separately, I also get an error when I try to run vtunet_train_3d.

Ask for Code Help

Thank you for your important work. While studying your article, we could not find your segmentation-map visualization code or the comparison models' weight files, so please share this part of the code at your convenience.

RuntimeError when training

Hi, I followed the installation instructions and started training with the following command:
CUDA_VISIBLE_DEVICES=0 nohup vtunet_train 3d_fullres vtunetTrainerV2_vtunet_tumor 3 0 &> small.out &

and I get the following error in small.out file:

...
delete:layers_up.1.blocks.1.mlp.fc2.weight;shape pretrain:torch.Size([21841]);shape model:torch.Size([192, 768])
delete:layers_up.1.blocks.1.mlp.fc2.bias;shape pretrain:torch.Size([21841]);shape model:torch.Size([192])
2023-05-14 20:38:25.769134: lr: 0.0001
/opt/conda/envs/myenv/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Exception in background worker 3:
 mmap length is greater than file size
using pin_memory on device 0
Traceback (most recent call last):
  File "/opt/conda/envs/myenv/bin/vtunet_train", line 33, in <module>
    sys.exit(load_entry_point('vtunet', 'console_scripts', 'vtunet_train')())
  File "/home/VTUNet/vtunet/run/run_training.py", line 150, in main
    trainer.run_training()
  File "/home/VTUNet/vtunet/training/network_training/vtunetTrainerV2_vtunet_tumor.py", line 515, in run_training
    ret = super().run_training()
  File "/home/VTUNet/vtunet/training/network_training/vtunetTrainer.py", line 319, in run_training
    super(vtunetTrainer, self).run_training()
  File "/home/VTUNet/vtunet/training/network_training/network_trainer.py", line 421, in run_training
    _ = self.tr_gen.next()
  File "/opt/conda/envs/myenv/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 181, in next
    return self.__next__()
  File "/opt/conda/envs/myenv/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 205, in __next__
    item = self.__get_next_item()
  File "/opt/conda/envs/myenv/lib/python3.8/site-packages/batchgenerators/dataloading/multi_threaded_augmenter.py", line 189, in __get_next_item
    raise RuntimeError("MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of "
RuntimeError: MultiThreadedAugmenter.abort_event was set, something went wrong. Maybe one of your workers crashed. This is not the actual error message! Look further up your stdout to see what caused the error. Please also check whether your RAM was full

Any ideas of what could be wrong here?

Training transformers with limited data

Hi @himashi92,
first of all, thank you very much for your great work!

I have a question regarding how you coped with having only very limited training data available (MSD Liver: 87 training samples). Transformer-based architectures like ViT, and detectors like DETR, have been shown to perform well only when a huge amount of labeled data is available (DETR's lower bound for 2D images is ~15k samples when training from scratch). So I would think that training a 3D transformer-based architecture like VT-UNet would be even more data hungry.

  • Have you made experiments without the pre-trained Swin-T weights as initialization?
  • Did you use heavy data augmentation for those small datasets?
  • From which shape did you crop to 128×128×128?
  • Is it due to your shifted-window approach that your architecture also performs very well with limited data, or have you experimented with different transformer layers (like ViT)?

So the main question is basically: what are, in your opinion, the key factors in the success of your approach when it comes to limited data?

I would be very happy if you could answer my questions.
BR Bastian

SOTA matter about the unet

Hi, this is really great work. I am currently reproducing it, but I found that my 3D U-Net performs better than VT-UNet, so I'm confused about which version of the 3D U-Net model you reproduced. Could you provide a link to the 3D U-Net code you used in your paper? Thank you very much; looking forward to your reply!

dice

Hello, how do I evaluate the Dice score after testing?
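
For reference, the Dice score reported in the paper is the standard overlap measure; a minimal sketch for a single binary region:

    import numpy as np

    def dice_score(pred, gt, eps=1e-8):
        """Dice = 2|P ∩ G| / (|P| + |G|) for binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)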

How to draw this chart?

Hello, your work is very interesting. I want to know how to draw this kind of chart. Thank you very much!
[chart screenshot omitted]

Confused about the dim of feature size

I'm confused about the dimensions of the output of forward_features. According to the paper, the input of forward_up_features should be (B, D/4, H/32, W/32, 8C), but the output of forward_features is (B, H/32, H/32, W/32, 8C). I also notice that the function forward_features_to_token_learner is not used, so is there something wrong with forward_features_to_token_learner?

Fusion module alpha parameter

Hello, your work is amazing. I have a question about the alpha parameter of the cross-attention and self-attention fusion module in the decoder. It was 0.5 in version 1 and in the paper, but it became 0.3 in version 2. Does this mean that the network pays more attention to the encoder features?

Issue with training

Hi! I am trying to train VT-UNet, but the process does not start in the background with nohup. Any idea why?

DATASET seek

Thank you for your code! For some reason, I can't download the dataset from here. Could you send the dataset to my email? Thank you very much.

VT-UNet-B download issue

Hi, why can't I download VT-UNet-B from Google Drive? Could you help and advise how to download it?

Visualization part

Hi,

Great work, and thanks for sharing the code. Could I ask how you visualized the results? Could you share the scripts?

Best,
Wei

Missing postprocessing.json file

Hi, when I try to test the model, it gives me a warning that the postprocessing.json file is missing. So I tried to run the consolidate-folds script, but it expects a folder called validation_raw which was not created during training. Please let me know what I am missing, and could you also include running the consolidate-folds script in the README, since it seems to be part of the required steps?

Retrain results do not match the paper result.

Thank you very much for the very good code, but some parts still confuse me. We retrained the model following the steps in the README, but both VT-UNET-S and VT-UNET-B differ somewhat from Table 1 in the paper. For example, retrained VT-UNET-B gives Dice_ET 0.80674840333256, Dice_TC 0.8525805936607558, Dice_WT 0.9158492648036342; HD_ET 4.286296144927589, HD_TC 4.707234057781734, HD_WT 3.6658415312320956; mean Dice 0.8583927539323167; mean HD 4.219790577980473. We speculate whether this is caused by different test sets. We are dividing the test set according to VT-UNet/VTUNet/dataset.json; is this correct? If convenient, could you please provide the trained VT-UNET-B model?

About Results on BraTS

The paper "A Volumetric Transformer for Accurate 3D Tumor Segmentation" evaluates VT-UNet's performance on the BraTS 2021 dataset, but the MICCAI paper "A Robust Volumetric Transformer for Accurate 3D Tumor Segmentation" evaluates VT-UNet on the MSD BraTS task. Why is VT-UNet's performance on BraTS 2021 no longer evaluated and reported in the MICCAI paper?

Ask for data help

Hello, I'd like to ask if you can provide the processing for liver or pancreatic tumors; the code shows the processing for the BraTS 2021 data. I need to handle the segmentation of gastric cancer data, and the data type is DICOM (.dcm). Thank you for sharing.

Testing

Hi @himashi92

For testing, how does one preprocess the data to make it ready for inference? The initial preprocessing only preprocesses the training and validation data.
