
convnext's Introduction

Official PyTorch implementation of ConvNeXt, from the following paper:

A ConvNet for the 2020s. CVPR 2022.
Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell and Saining Xie
Facebook AI Research, UC Berkeley
[arXiv][video]


We propose ConvNeXt, a pure ConvNet model constructed entirely from standard ConvNet modules. ConvNeXt is accurate, efficient, scalable and very simple in design.

Catalog

  • ImageNet-1K Training Code
  • ImageNet-22K Pre-training Code
  • ImageNet-1K Fine-tuning Code
  • Downstream Transfer (Detection, Segmentation) Code
  • Image Classification [Colab] and Web Demo Hugging Face Spaces
  • Fine-tune on CIFAR with Weights & Biases logging [Colab]

Results and Pre-trained Models

ImageNet-1K trained models

| name       | resolution | acc@1 | #params | FLOPs  | model |
|------------|------------|-------|---------|--------|-------|
| ConvNeXt-T | 224x224    | 82.1  | 28M     | 4.5G   | model |
| ConvNeXt-S | 224x224    | 83.1  | 50M     | 8.7G   | model |
| ConvNeXt-B | 224x224    | 83.8  | 89M     | 15.4G  | model |
| ConvNeXt-B | 384x384    | 85.1  | 89M     | 45.0G  | model |
| ConvNeXt-L | 224x224    | 84.3  | 198M    | 34.4G  | model |
| ConvNeXt-L | 384x384    | 85.5  | 198M    | 101.0G | model |

ImageNet-22K trained models

| name        | resolution | acc@1 | #params | FLOPs  | 22k model | 1k model |
|-------------|------------|-------|---------|--------|-----------|----------|
| ConvNeXt-T  | 224x224    | 82.9  | 29M     | 4.5G   | model     | model    |
| ConvNeXt-T  | 384x384    | 84.1  | 29M     | 13.1G  | -         | model    |
| ConvNeXt-S  | 224x224    | 84.6  | 50M     | 8.7G   | model     | model    |
| ConvNeXt-S  | 384x384    | 85.8  | 50M     | 25.5G  | -         | model    |
| ConvNeXt-B  | 224x224    | 85.8  | 89M     | 15.4G  | model     | model    |
| ConvNeXt-B  | 384x384    | 86.8  | 89M     | 47.0G  | -         | model    |
| ConvNeXt-L  | 224x224    | 86.6  | 198M    | 34.4G  | model     | model    |
| ConvNeXt-L  | 384x384    | 87.5  | 198M    | 101.0G | -         | model    |
| ConvNeXt-XL | 224x224    | 87.0  | 350M    | 60.9G  | model     | model    |
| ConvNeXt-XL | 384x384    | 87.8  | 350M    | 179.0G | -         | model    |

ImageNet-1K trained models (isotropic)

| name       | resolution | acc@1 | #params | FLOPs | model |
|------------|------------|-------|---------|-------|-------|
| ConvNeXt-S | 224x224    | 78.7  | 22M     | 4.3G  | model |
| ConvNeXt-B | 224x224    | 82.0  | 87M     | 16.9G | model |
| ConvNeXt-L | 224x224    | 82.6  | 306M    | 59.7G | model |

Installation

Please check INSTALL.md for installation instructions.

Evaluation

We give an example evaluation command for an ImageNet-22K pre-trained, then ImageNet-1K fine-tuned ConvNeXt-B:

Single-GPU

python main.py --model convnext_base --eval true \
--resume https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_1k_224.pth \
--input_size 224 --drop_path 0.2 \
--data_path /path/to/imagenet-1k

Multi-GPU

python -m torch.distributed.launch --nproc_per_node=8 main.py \
--model convnext_base --eval true \
--resume https://dl.fbaipublicfiles.com/convnext/convnext_base_22k_1k_224.pth \
--input_size 224 --drop_path 0.2 \
--data_path /path/to/imagenet-1k

This should give

* Acc@1 85.820 Acc@5 97.868 loss 0.563
  • For evaluating other model variants, change --model, --resume, --input_size accordingly. You can get the url to pre-trained models from the tables above.
  • Setting model-specific --drop_path is not strictly required in evaluation, as the DropPath module in timm behaves the same during evaluation; but it is required in training. See TRAINING.md or our paper for the values used for different models.

Training

See TRAINING.md for training and fine-tuning instructions.

Acknowledgement

This repository is built using the timm library, DeiT and BEiT repositories.

License

This project is released under the MIT license. Please see the LICENSE file for more information.

Citation

If you find this repository helpful, please consider citing:

@Article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}

convnext's People

Contributors

ak391, anonymouscommitter, ayulockin, datumbox, ericmintun, hannamao, liuzhuang13, s9xie


convnext's Issues

Question 1x1 conv vs linear

Congratulations on your work, and thanks for sharing! I'd like to naively ask: what is the reason for implementing the 1x1 convs as fully connected layers? I know they are equivalent, but I had thought the latter was less efficient.

Thanks in advance!
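For context (not an official answer): in the ConvNeXt block the pointwise convolutions are applied as nn.Linear layers on a channels-last tensor, which computes exactly the same function as a 1x1 nn.Conv2d; on channels-last layouts the Linear form can be faster in practice. A quick numerical check of the equivalence:

import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(64, 128, kernel_size=1)    # pointwise conv on (N, C, H, W)
linear = nn.Linear(64, 128)                 # same map applied on (N, H, W, C)

# Copy the conv weights into the linear layer so both compute the same function.
linear.weight.data = conv.weight.data.squeeze(-1).squeeze(-1)
linear.bias.data = conv.bias.data

x = torch.randn(2, 64, 14, 14)
y_conv = conv(x)
y_linear = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
print(torch.allclose(y_conv, y_linear, atol=1e-6))  # True, up to float error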

Hello, I have an initial problem

Traceback (most recent call last):
  File "C:/Users/janice/Desktop/covnet/ConvNeXt-main/main.py", line 477, in <module>
    main(args)
  File "C:/Users/janice/Desktop/covnet/ConvNeXt-main/main.py", line 205, in main
    utils.init_distributed_mode(args)
  File "C:\Users\janice\Desktop\covnet\ConvNeXt-main\utils.py", line 329, in init_distributed_mode
    torch.distributed.init_process_group(backend=args.dist_backend, init_method=args.dist_url,
  File "C:\Users\janice\anaconda3\envs\covnet\lib\site-packages\torch\distributed\distributed_c10d.py", line 503, in init_process_group
    _update_default_pg(_new_process_group_helper(
  File "C:\Users\janice\anaconda3\envs\covnet\lib\site-packages\torch\distributed\distributed_c10d.py", line 597, in _new_process_group_helper
    raise RuntimeError("Distributed package doesn't have NCCL "
RuntimeError: Distributed package doesn't have NCCL built in
Killing subprocess 14712
Traceback (most recent call last):
  File "C:\Users\janice\anaconda3\envs\covnet\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\janice\anaconda3\envs\covnet\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\janice\anaconda3\envs\covnet\lib\site-packages\torch\distributed\launch.py", line 340, in <module>
    main()
  File "C:\Users\janice\anaconda3\envs\covnet\lib\site-packages\torch\distributed\launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "C:\Users\janice\anaconda3\envs\covnet\lib\site-packages\torch\distributed\launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\Users\janice\anaconda3\envs\covnet\python.exe', '-u', 'C:/Users/janice/Desktop/covnet/ConvNeXt-main/main.py', '--local_rank=0', '--model', 'convnext_tiny', '--drop_path', '0.1', '--batch_size', '128', '--lr', '4e-3', '--update_freq', '4', '--model_ema', 'true', '--model_ema_eval', 'true', '--data_path', 'C:/Users/janice/Desktop/covnet/train_val_test/training', '--output_dir', 'C:/Users/janice/Desktop/covnet/results']' returned non-zero exit status 1.

I have no idea about this error

About mmcv_custom

Where should I put convnext/semantic_segmentation/mmcv_custom? Please advise.

How to improve reproducibility?

I run main.py to train, but the final accuracy varies between runs by more than 0.5%. Could you tell me how to improve reproducibility and reduce this variance?

I know there is some code in main.py to improve reproducibility:
# fix the seed for reproducibility
seed = args.seed + utils.get_rank()
torch.manual_seed(seed)
np.random.seed(seed)
cudnn.benchmark = True
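Not an official answer, but note that the snippet above leaves cudnn.benchmark = True, which lets cuDNN pick kernels non-deterministically; data-loader workers and distributed training add further randomness. A stricter (and slower) setup might look like this sketch:

import random
import numpy as np
import torch
import torch.backends.cudnn as cudnn

def seed_everything(seed):
    # Seed every RNG the training loop may touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism: disable cuDNN autotuning and
    # request deterministic kernels where they exist.
    cudnn.benchmark = False
    cudnn.deterministic = True

Even with this, some run-to-run variance is normal for large-scale training.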

expected results on CIFAR

Can you please share the expected results or training regime for the CIFAR-100 dataset (if they exist)?
Using the same regime posted for convnext-tiny on ImageNet yields a test_acc1 of 82.1 on CIFAR-100, which seems quite low compared to the ImageNet results.

Specifically, I ran the following:

python -m torch.distributed.launch --nproc_per_node=8 main.py \
--model $ARCH --drop_path 0.1 \
--batch_size 64 --lr 4e-3 --update_freq 8 \
--model_ema true --model_ema_eval true \
--data_set CIFAR \
--data_path $CIFAR_DIR \
--output_dir $ROOT_RES \
--seed $SEED

Training Log

Many thanks for sharing your fantastic work! We are trying to reproduce it; could you release the training log? Thanks in advance.

Evaluation on my own classes

I understand that ConvNeXt pretrained on ImageNet-22K is meant to work as a general-purpose classifier over a very large set of classes. Is there any way to evaluate the model only on the classes I am interested in?
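Not from the repo, but a common way to evaluate only on classes of interest is to index the logits before taking the argmax. A minimal sketch, assuming the repo's models/convnext.py is importable and using hypothetical class indices:

import torch
from models.convnext import convnext_tiny  # this repo's model definitions

# Hypothetical example: restrict a 1000-way classifier to a few classes of interest.
model = convnext_tiny(pretrained=False).eval()    # load your fine-tuned weights here
class_subset = torch.tensor([281, 282, 285])       # e.g. some ImageNet-1K cat classes

images = torch.randn(4, 3, 224, 224)               # stand-in for a real batch
with torch.no_grad():
    logits = model(images)                          # (B, 1000)
subset_logits = logits[:, class_subset]
pred = class_subset[subset_logits.argmax(dim=1)]    # predictions limited to the subset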

Hello, what is this problem? Thanks.

Traceback (most recent call last):
  File "run_with_submitit.py", line 122, in <module>
    main()
  File "run_with_submitit.py", line 113, in main
    args.dist_url = get_init_file().as_uri()
  File "run_with_submitit.py", line 41, in get_init_file
    os.makedirs(str(get_shared_folder()), exist_ok=True)
  File "run_with_submitit.py", line 37, in get_shared_folder
    raise RuntimeError("No shared folder available")
RuntimeError: No shared folder available

some files don't exist.

Hi,

I found that some files referenced in the master branch do not exist, for example:

"from .checkpoint import load_checkpoint"

Results on ImageNet-C

Hi,

Thanks for the amazing work. I have loaded the pretrained model and evaluated it on ImageNet-C. However, I cannot reproduce the same mCE as reported in the paper.

Could you please also upload the ImageNet-C evaluation code, or give some description of how the mCE is measured and calculated?

Thanks.

Best regards,
Zhou

There is no such directory 'tools'

In the object_detection README:

# single-gpu training
python tools/train.py <CONFIG_FILE> --cfg-options model.pretrained=<PRETRAIN_MODEL> [other optional arguments]

It suggests using train.py under tools, but I cannot find a tools directory anywhere.

ConvNeXt is slower than Swin?

I use convnext_tiny as a pretrained model to fine-tune on my own dataset, but I found that convnext_tiny is slower than swin_tiny. I use 4 NVIDIA 1080 Ti GPUs, and convnext_tiny takes about three times as long as swin_tiny, even though the FLOPs of the two models are similar. Why is ConvNeXt slower?
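Not an official answer, but wall-clock speed depends on more than FLOPs (memory layout, how well the depthwise 7x7 convolutions are optimized on a given GPU, mixed precision, data loading, etc.), so it is worth timing the forward pass in isolation. A minimal throughput sketch, assuming a CUDA device and the repo's models/convnext.py:

import time
import torch
from models.convnext import convnext_tiny  # this repo's model definition

model = convnext_tiny(pretrained=False).cuda().eval()
x = torch.randn(64, 3, 224, 224, device="cuda")

with torch.no_grad():
    for _ in range(10):              # warm-up iterations
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(50):
        model(x)
    torch.cuda.synchronize()         # wait for all kernels before stopping the clock
elapsed = time.time() - start
print(f"{50 * x.shape[0] / elapsed:.1f} images/s")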

Question: Small initial weights

Hi,

Thank you for sharing the code.
I saw you are using trunc_normal_(m.weight, std=.02) in the Block module.

I think it makes the weights too small and the model hard to train.
And the values may become even smaller after being multiplied by gamma.

Is there a reason you didn't keep the default initialization?
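For reference, the initialization in question, paraphrasing models/convnext.py (the same std=0.02 truncated normal used by ViT/DeiT/Swin; LayerNorm parameters keep their defaults):

import torch.nn as nn
from timm.models.layers import trunc_normal_

def _init_weights(self, m):
    # Applied to every submodule via self.apply(self._init_weights)
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        trunc_normal_(m.weight, std=.02)   # truncated normal, std 0.02
        nn.init.constant_(m.bias, 0)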

KeyError: 'DistOptimizerHook is not in the hook registry'

Hello, when I was running the object detection code, the following problem occurred. It seems to be caused by files missing from mmdet/models/detectors. Could you release the complete code?

Traceback (most recent call last):
  File "tools/train.py", line 189, in <module>
    main()
  File "tools/train.py", line 185, in main
    meta=meta)
  File "/root/data/UniverseNet-master/mmdet/apis/train.py", line 166, in train_detector
    cfg.get('momentum_config', None))
  File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 540, in register_training_hooks
    self.register_optimizer_hook(optimizer_config)
  File "/opt/conda/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 448, in register_optimizer_hook
    hook = mmcv.build_from_cfg(optimizer_config, HOOKS)
  File "/opt/conda/lib/python3.7/site-packages/mmcv/utils/registry.py", line 45, in build_from_cfg
    f'{obj_type} is not in the {registry.name} registry')
KeyError: 'DistOptimizerHook is not in the hook registry'

FP16 settings

Hi, thanks for your work.
In the semantic segmentation configs (base, large, xlarge), how can I disable fp16? There is a bug if you set fp16=False directly in the config.

pretrained checkpoint loading bug

Hi,

I found a bug in pretrained checkpoint loading.

In mmdetection/tools/train.py, line 159:
"model.init_weights()"

In convnext.py, line 115: "def init_weights(self, pretrained=None):". We need to set the "pretrained" parameter, but I cannot see anywhere where it is set.

ConvNeXt-S and ConvNeXt-T models pretrained on ImageNet-22k

Hi, this is certainly great work, and thank you for the contribution!

Models pretrained on ImageNet-22K significantly outperform those trained only on ImageNet-1K. Would it be possible for you to release ConvNeXt-S and ConvNeXt-T models pretrained on ImageNet-22K? If so, it would motivate many on-device downstream tasks to adopt ConvNeXt as the backbone. I've tried the current ImageNet-1K trained models with panoptic segmentation and they work like a charm. ConvNeXt-S and ConvNeXt-T pretrained on ImageNet-22K would be greatly appreciated. Thanks!

Will the ImageNet 22k pretrained models of 384x384 be released?

Hi,

Thanks for the excellent work, which I believe presents the best backbone for vision to date. I like the detailed analysis and discussions in the paper as well. I wonder whether the 384x384 ConvNeXt-L and ConvNeXt-XL models pretrained on ImageNet-22K will be made available?

Question: Can you provide a smaller pretrained model?

Thank you for your excellent work!
In many cases, we need a smaller model size for semantic segmentation tasks, such as HRNetW18SmallV2. Can you provide a smaller pretrained model, like dims=[18, 36, 72, 144]?

Unexpected keys when creating the convnext_isotropic_large pretrained model

Hi, I tried to load the pretrained weights of the convnext_isotropic_large model myself, but I got some unexpected keys. The code is:

model = convnext_isotropic_large(pretrained=True)

And I get a RuntimeError:

 Unexpected key(s) in state_dict: "blocks.0.gamma", "blocks.1.gamma", "blocks.2.gamma", "blocks.3.gamma", "blocks.4.gamma", "blocks.5.gamma", "blocks.6.gamma", "blocks.7.gamma", "blocks.8.gamma", "blocks.9.gamma", "blocks.10.gamma", "blocks.11.gamma", "blocks.12.gamma", "blocks.13.gamma", "blocks.14.gamma", "blocks.15.gamma", "blocks.16.gamma", "blocks.17.gamma", "blocks.18.gamma", "blocks.19.gamma", "blocks.20.gamma", "blocks.21.gamma", "blocks.22.gamma", "blocks.23.gamma", "blocks.24.gamma", "blocks.25.gamma", "blocks.26.gamma", "blocks.27.gamma", "blocks.28.gamma", "blocks.29.gamma", "blocks.30.gamma", "blocks.31.gamma", "blocks.32.gamma", "blocks.33.gamma", "blocks.34.gamma", "blocks.35.gamma". 

I think there might be something wrong with the pretrained weights here.

Default image input size of the pre-trained COCO models?

What is the image input size of the pre-trained coco models? Is it correct that this is 1280 x 800 as stated in the paper in the caption of Table 3?

To reduce compute (GFLOPs), I would prefer to use an input size of 640 x 640 when fine-tuning on my custom dataset. How does this affect the model's detection performance?

Accuracy on the test set is not improving during training (tested on two different datasets)

Thanks for sharing this code.

I tried to run the training script on my custom dataset (~50k samples), and somehow I could not get any improvement in classification accuracy even after 300 epochs.

!python main.py --model convnext_tiny --drop_path 0.1 --data-path "/my/custom/dataset" --data_set "customDS" --nb_classes 2 --batch_size 256 --lr 4e-2 --update_freq 4 --model_ema true --model_ema_eval true

Epoch: [0] Total time: 0:00:41 (1.1597 s / it)
Averaged stats: lr: 0.003596 min_lr: 0.003596 loss: 2.2043 (2.8601) class_acc: 0.5234 (0.5078) weight_decay: 0.0500 (0.0500)
Accuracy of the model on the 2297 test images: 44.4%
Max accuracy: 44.36%

Epoch: [299] Total time: 0:00:22 (0.6137 s / it)
Averaged stats: lr: 0.000010 min_lr: 0.000010 loss: 0.6883 (0.6902) class_acc: 0.5352 (0.5386) weight_decay: 0.0500 (0.0500)
Accuracy of the model on the 2297 test images: 55.6%
Max accuracy: 55.86%

I trained the same custom dataset using ResNet101 and it achieved 90% classification accuracy on the same test set.

I also repeated training on the Imagenette dataset (a subset of ImageNet with 10 easy classes), and the same thing happened: no substantial decrease in training loss and almost no improvement in classification accuracy.
https://github.com/fastai/imagenette

!python -m torch.distributed.launch --nproc_per_node=2 main.py --model convnext_tiny --drop_path 0.1 --batch_size 64 --lr 4e-3 --update_freq 4 --model_ema true --model_ema_eval true --data_path "/imagenette/with/10/classes" --data_set "IMNET" --output_dir "/my/output/path" --nb_classes 10

Epoch: [284] Total time: 0:00:34 (0.3696 s / it)
Averaged stats: lr: 0.000029 min_lr: 0.000029 loss: 2.0305 (2.0208) weight_decay: 0.0500 (0.0500)
Accuracy of the model on the 1100 test images: 1.2%
Max accuracy: 9.91%

How can I improve the training performance? Did you test the training parameters on smaller datasets?

Thanks!

Question: Why no activation function in the Stem layer?

Hi,

I was curious why you don't use an activation function in the stem layer:

stem = nn.Sequential(
            nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4),
            LayerNorm(dims[0], eps=1e-6, data_format="channels_first")
        )

I think the standard ResNet uses an activation function in its stem layer (before the max-pool).
I couldn't find where in the paper this change is discussed.
Is there some intuition for why this design choice makes sense?
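Not an official answer, but the paper motivates the stem as a ViT-style "patchify" layer, and the patch-embedding layer of vision transformers is likewise a linear projection followed by a norm, with no non-linearity. For comparison, a sketch of both stems (the ResNet stem follows the torchvision convention; the patchify stem mirrors the snippet above):

import torch.nn as nn
from models.convnext import LayerNorm  # the repo's channels-first LayerNorm

# Standard ResNet stem (torchvision-style): conv -> BN -> ReLU -> max-pool.
resnet_stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
)

# ConvNeXt "patchify" stem: one non-overlapping 4x4 conv plus LayerNorm, no activation.
patchify_stem = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=4, stride=4),
    LayerNorm(96, eps=1e-6, data_format="channels_first"),
)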

Only object detection

Can I train Mask R-CNN or Cascade Mask R-CNN with a ConvNeXt backbone on a custom dataset that has no segmentation masks, only object detection annotations? I get a mask-related error on startup even when setting the with_mask=False flag in the config files: "KeyError: 'gt_masks'".

The results of ADE20K

We found a problem: for ADE20K evaluation, BEiT defaults to 'whole'-image testing, but 'slide' testing is expected here, so the config should contain test_cfg = dict(mode='slide', crop_size=crop_size, stride=(341, 341)).
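A sketch of the suggested change in mmseg config style (only mode and stride come from the report above; crop_size is an assumed value, so check the repo's semantic_segmentation configs for the actual one):

# mmseg-style config fragment
crop_size = (512, 512)  # assumed value
model = dict(
    test_cfg=dict(mode='slide', crop_size=crop_size, stride=(341, 341)),
)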

custom dataset

Hello, when I use a custom dataset that has no mask labels, how should I modify the code?

The batch-sizes of single machine commands are not adjusted

In the training doc, I believe the batch size (or the LR) in the single-machine commands needs to be adjusted to keep the total batch size the same.

For example, currently the ConvNeXt-S reports:

  • Multi-node: --nodes 4 --ngpus 8 --batch_size 128 --lr 4e-3
  • Single-machine: --nproc_per_node=8 --batch_size 128 --lr 4e-3 <- I believe here it should be --batch_size 512

Same applies for the other variants.
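The arithmetic behind this report (my reading; gradient accumulation via --update_freq, if set, would multiply the effective batch size further):

# effective batch size = nodes * gpus_per_node * per_gpu_batch_size
multi_node     = 4 * 8 * 128   # 4096, the intended recipe
single_machine = 1 * 8 * 128   # 1024, 4x smaller than intended
proposed_fix   = 1 * 8 * 512   # 4096, matches the multi-node recipe again
print(multi_node, single_machine, proposed_fix)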

Question about layer norm.

Very interesting work. I read the code and the paper, and I have a question about the layer normalization in ConvNeXt. Previously, when using CNNs, layer norm usually computed the mean and standard deviation over [C, H, W] (as described in the GN paper and PyTorch's documentation). But in Transformer architectures such as DETR, the mean and variance are computed over the channel dimension only. Did you compare layer norm over [C] versus [C, H, W] in ConvNeXt? Was the layer norm in ConvNeXt carefully designed, or does it just imitate the Transformer architecture?
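To make the two variants concrete: the repo's custom LayerNorm normalizes over the channel dimension only, independently at each spatial position, as Transformers do; the "classic CNN" reading of layer norm from the GN paper normalizes over (C, H, W) per sample. A minimal sketch of both:

import torch

x = torch.randn(2, 64, 56, 56)  # (N, C, H, W)

# Transformer-style / ConvNeXt: normalize over channels at every spatial position.
mean_c = x.mean(dim=1, keepdim=True)
var_c = x.var(dim=1, unbiased=False, keepdim=True)
ln_channel_only = (x - mean_c) / torch.sqrt(var_c + 1e-6)

# "Classic CNN" layer norm: normalize over (C, H, W) for each sample.
mean_chw = x.mean(dim=(1, 2, 3), keepdim=True)
var_chw = x.var(dim=(1, 2, 3), unbiased=False, keepdim=True)
ln_chw = (x - mean_chw) / torch.sqrt(var_chw + 1e-6)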

Custom Data

Will you release anything about training the model on custom data?

How to convert the saved model to an inference model?

The training checkpoint saved for convnext_base is 1.4 GB. How can I convert it to the 89M reported in the repo? I tried torch.save(model.state_dict(), 'convnext-b-224-model-best.pth') but got 335 MB.
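Not an official answer, but note that 89M in the tables above is the parameter count, not a file size: 89 million fp32 parameters already take roughly 340 MB on disk, so the 335 MB state_dict is expected. The remaining size of the training checkpoint typically comes from the optimizer and EMA states. A hedged sketch for stripping a training checkpoint down to inference weights (the checkpoint key names are assumptions; adjust to whatever torch.load shows):

import torch

ckpt = torch.load('checkpoint-best.pth', map_location='cpu')
state_dict = ckpt.get('model', ckpt)   # key name assumed; could also be 'model_ema'
torch.save(state_dict, 'convnext_base_weights.pth')   # roughly 335 MB in fp32

# Optional: store weights in fp16 to roughly halve the file size.
fp16_state = {k: (v.half() if v.is_floating_point() else v) for k, v in state_dict.items()}
torch.save(fp16_state, 'convnext_base_weights_fp16.pth')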

[Feature Request] quantization code for ConvNeXt

Actually, I tried to use torch.fx to quantize ConvNeXt to see the performance after quantization, but I get this error: TypeError: dequantize() takes no arguments (1 given). Could you please help?

import torch
from torch.quantization import quantize_fx
torch.backends.cudnn.enabled = False
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

device_cpu = torch.device("cpu")

save_ckpt_path = 'cn-fx.pt'
from models.convnext import convnext_tiny
model = convnext_tiny(False)
model.eval() # essential
model.to(device_cpu)
inp_size = (224,224)

res = model(torch.randn((1,3,inp_size[0],inp_size[1])).to(device_cpu))
print(res.shape)
graph_module = torch.fx.symbolic_trace(model)

qconfig_dict = {'': torch.quantization.get_default_qat_qconfig('qnnpack')}
# qconfig_dict = {'': torch.quantization.get_default_qat_qconfig('fbgemm')}
mp = quantize_fx.prepare_fx(graph_module, qconfig_dict)


def eval_fn(model, device=device_cpu):
    with torch.no_grad():
        model.to(device)
        for i in range(20):
            ims = torch.rand((1,3,224,224)).to(device)
            output = model(ims)

eval_fn(mp, device=device_cpu)
mq = quantize_fx.convert_fx(mp)

dummy_input = torch.rand(1, 3, inp_size[0], inp_size[1])
torchscript_model = torch.jit.trace(mq, dummy_input)
from torch.utils.mobile_optimizer import optimize_for_mobile
torchscript_model = optimize_for_mobile(torchscript_model)
torch.jit.save(torchscript_model, save_ckpt_path)
torchscript_model._save_for_lite_interpreter(save_ckpt_path.replace('.pt', '.ptl'))

about order of conv and layer norm in downsample_layers

Hi, thanks for sharing great work!

I have a question about the order of conv and layer norm in downsample_layers.
In the stem, the conv comes first and then the layer norm, but in the downsample layers it seems the layer norm comes first:

LayerNorm(dims[i], eps=1e-6, data_format="channels_first"),

This means downsample_layers contains the following layers in order:

  • conv2d
  • layernorm
  • layernorm
  • conv2d
  • layernorm
  • conv2d
  • layernorm
  • conv2d

Is this an intentional design, or am I just misunderstanding something?
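For reference, this is roughly how the downsample layers are built (paraphrasing models/convnext.py): the stem is Conv followed by LayerNorm, while each later downsampling stage is LayerNorm followed by Conv, which produces exactly the conv/LN/LN/conv/... sequence listed above.

import torch.nn as nn
from models.convnext import LayerNorm  # the repo's channels-first LayerNorm

dims, in_chans = [96, 192, 384, 768], 3   # ConvNeXt-T sizes, for illustration

downsample_layers = nn.ModuleList()
stem = nn.Sequential(
    nn.Conv2d(in_chans, dims[0], kernel_size=4, stride=4),
    LayerNorm(dims[0], eps=1e-6, data_format="channels_first"),
)
downsample_layers.append(stem)
for i in range(3):
    downsample_layers.append(nn.Sequential(
        LayerNorm(dims[i], eps=1e-6, data_format="channels_first"),
        nn.Conv2d(dims[i], dims[i + 1], kernel_size=2, stride=2),
    ))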

about apex

My graphics card is a 3070 Ti, with CUDA 11.3 and torch 1.10.1, but I can't compile apex. Do you know how to solve this?

The error is as follows:

ERROR: Command errored out with exit status 1: /home/aini/software/anaconda3/envs/cu113_py37_tr110_mm/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-b3ie2ccd/setup.py'"'"'; file='"'"'/tmp/pip-req-build-b3ie2ccd/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-t1h4adu5/install-record.txt --single-version-externally-managed --compile --install-headers /home/aini/software/anaconda3/envs/cu113_py37_tr110_mm/include/python3.7m/apex Check the logs for full command output.

Detectron2 configs for ConvNeXt

Hello,
Thanks for releasing the code for ConvNeXt. It helps us big time.
I am currently working on an object detection project with Detectron2 as the framework.

Is there any possibility of configs being released for Detectron2?

Weird learning rate

Hi,

I trained a ConvNeXt model in my project, but I noticed a weird learning rate.

I set lr = 0.002, but in the training loop the lr parameter seems to be only 0.00002.

Does anyone know why?

