Pose-Transfer

Code for the CVPR 2019 (oral) paper Progressive Pose Attention Transfer for Person Image Generation. The paper is available here.

Video generation with a single image as input. More details can be found in the supplementary materials of our paper.

News

  • We have released a new branch PATN_Fine. It introduces a segment-based skip-connection and a novel segment-based style loss, achieving even better results on DeepFashion.
  • A video demo is available now. We further improve the performance of our model by introducing a segment-based skip-connection. We will release the code soon. Refer to our supplementary materials for more details.
  • Code for PyTorch 1.0 is now available under the branch pytorch_v1.0. The same results on both datasets can be reproduced with the pretrained model.

Notes:

In PyTorch 1.0, running_mean and running_var are not saved for Instance Normalization layers by default. To reproduce our results from the paper, launch python tool/rm_insnorm_running_vars.py to remove the corresponding keys from the pretrained model. (Only for the DeepFashion dataset.)
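For reference, a minimal sketch of what such a cleanup does, assuming the checkpoint is a plain state_dict saved with torch.save (the path below is hypothetical):

import torch

ckpt_path = 'checkpoints/fashion_PATN/latest_net_netG.pth'  # hypothetical path
state = torch.load(ckpt_path, map_location='cpu')
# drop the InstanceNorm running-stat buffers that PyTorch >= 0.4 refuses to load
state = {k: v for k, v in state.items()
         if not k.endswith(('running_mean', 'running_var'))}
torch.save(state, ckpt_path)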

This is a PyTorch implementation of pose transfer on both the Market-1501 and DeepFashion datasets. The code is written by Tengteng Huang and Zhen Zhu.

Requirements

  • pytorch (0.3.1)
  • torchvision (0.2.0)
  • numpy
  • scipy
  • scikit-image
  • pillow
  • pandas
  • tqdm
  • dominate

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/tengteng95/Pose-Transfer.git
cd Pose-Transfer

Data Preparation

We provide our dataset split files and extracted keypoint files for convenience.

Market1501

  • Download the Market-1501 dataset from here. Rename bounding_box_train and bounding_box_test to train and test, and put them under the market_data directory.
  • Download train/test splits and train/test keypoint annotations from Google Drive or Baidu Disk, including market-pairs-train.csv, market-pairs-test.csv, market-annotation-train.csv and market-annotation-test.csv. Put these four files under the market_data directory.
  • Generate the pose heatmaps. Launch
python tool/generate_pose_map_market.py
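For intuition, here is a minimal sketch of how such keypoint-to-heatmap conversion typically works; this is an assumption about what cords_to_map in the tool scripts does, not the authors' exact code. Market-1501 images are 128x64, and missing keypoints are marked -1 in the annotations:

import numpy as np

def cords_to_map_sketch(cords, img_size=(128, 64), sigma=6):
    # one Gaussian channel per keypoint; missing keypoints (-1) stay all-zero
    result = np.zeros(img_size + (cords.shape[0],), dtype=np.float32)
    yy, xx = np.mgrid[0:img_size[0], 0:img_size[1]]
    for i, (y, x) in enumerate(cords):
        if y == -1 or x == -1:
            continue
        result[..., i] = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
    return result

heatmap = cords_to_map_sketch(np.array([[40, 32], [-1, -1]]))  # toy 2-keypoint example
print(heatmap.shape)  # (128, 64, 2)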

DeepFashion

Note: In our settings, we crop the images of DeepFashion to a resolution of 176x256 in a center-crop manner (a sketch of this crop follows the list below).

  • Download the DeepFashion (In-shop Clothes Retrieval Benchmark) images and split them into the train and test splits. Launch
python tool/generate_fashion_datasets.py
  • Download train/test pairs and train/test keypoint annotations from Google Drive or Baidu Disk, including fasion-resize-pairs-train.csv, fasion-resize-pairs-test.csv, fasion-resize-annotation-train.csv and fasion-resize-annotation-test.csv. Put these four files under the fashion_data directory.
  • Generate the pose heatmaps. Launch
python tool/generate_pose_map_fashion.py
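As referenced in the note above, a minimal sketch of the 176x256 center crop (assumptions: the raw DeepFashion images are 256x256 and are cropped to width 176 while keeping the full height; the file path is hypothetical):

from PIL import Image

img = Image.open('fashion_data/train/example.jpg')  # hypothetical file
w, h = img.size                                     # e.g. (256, 256)
left = (w - 176) // 2
img.crop((left, 0, left + 176, h)).save('fashion_data/train/example_crop.jpg')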

Notes:

Optionally, you can also generate these files by yourself.

  1. Keypoints files

We use OpenPose to generate keypoints.

  • Download pose estimator from Google Drive or Baidu Disk. Put it under the root folder Pose-Transfer.
  • Change the paths input_folder and output_path in tool/compute_coordinates.py, then launch
python2 compute_coordinates.py
  2. Dataset split files
python2 tool/create_pairs_dataset.py
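For orientation, a sketch of how these files are typically consumed (assumptions: the pair files are two-column CSVs of source/target image names, and the annotation files are colon-separated with name, keypoints_y, and keypoints_x columns, which is what the tool/ scripts expect):

import pandas as pd

pairs = pd.read_csv('market_data/market-pairs-train.csv')
annos = pd.read_csv('market_data/market-annotation-train.csv', sep=':')
print(pairs.head())
print(annos.columns.tolist())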

Train a model

Market-1501

python train.py --dataroot ./market_data/ --name market_PATN --model PATN --lambda_GAN 5 --lambda_A 10  --lambda_B 10 --dataset_mode keypoint --no_lsgan --n_layers 3 --norm batch --batchSize 32 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0

DeepFashion

python train.py --dataroot ./fashion_data/ --name fashion_PATN --model PATN --lambda_GAN 5 --lambda_A 1 --lambda_B 1 --dataset_mode keypoint --n_layers 3 --norm instance --batchSize 7 --pool_size 0 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0

Test the model

Market1501

python test.py --dataroot ./market_data/ --name market_PATN --model PATN --phase test --dataset_mode keypoint --norm batch --batchSize 1 --resize_or_crop no --gpu_ids 2 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-test.csv --which_epoch latest --results_dir ./results --display_id 0

DeepFashion

python test.py --dataroot ./fashion_data/ --name fashion_PATN --model PATN --phase test --dataset_mode keypoint --norm instance --batchSize 1 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-test.csv --which_epoch latest --results_dir ./results --display_id 0

Evaluation

We adopt SSIM, mask-SSIM, IS, mask-IS, DS, and PCKh to evaluate Market-1501, and SSIM, IS, DS, and PCKh to evaluate DeepFashion.

1) SSIM, mask-SSIM, IS and mask-IS

For evaluation, TensorFlow 1.4.1 (Python 3) is required. Please see requirements_tf.txt for details.
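For intuition, a minimal sketch of the SSIM part of these metrics under a modern scikit-image (assumptions: generated and ground-truth images are aligned uint8 RGB arrays of the same size; the repo's tool/getMetrics_* scripts additionally compute IS and the mask-* variants under TF 1.4):

import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(generated, targets):
    scores = [ssim(g, t, channel_axis=-1, data_range=255)
              for g, t in zip(generated, targets)]
    return float(np.mean(scores))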

For Market-1501:

python tool/getMetrics_market.py

For DeepFashion:

python tool/getMetrics_fashion.py

If you still have problems with evaluation, please consider using docker:

docker run -v <Pose-Transfer path>:/tmp -w /tmp --runtime=nvidia -it --rm tensorflow/tensorflow:1.4.1-gpu-py3 bash
# now in docker:
$ pip install scikit-image tqdm 
$ python tool/getMetrics_market.py

Refer to this Issue.

2) DS Score

Download the SSD model pretrained on VOC (300x300) and install the proper Caffe version of SSD. Put the model in the ssd_score folder.

For Market-1501:

python compute_ssd_score_market.py --input_dir path/to/generated/images

For DeepFashion:

python compute_ssd_score_fashion.py --input_dir path/to/generated/images

3) PCKh

  • First, run tool/crop_market.py or tool/crop_fashion.py.
  • Download pose estimator from Google Drive or Baidu Disk. Put it under the root folder Pose-Transfer.
  • Change the paths input_folder and output_path in tool/compute_coordinates.py, then launch
python2 compute_coordinates.py
  • Run tool/calPCKH_fashion.py or tool/calPCKH_market.py.
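For reference, a minimal sketch of the PCKh@0.5 criterion (assumption: a predicted keypoint counts as correct when it lies within half the head-segment length of the ground truth; -1 marks missing keypoints, as in the annotation files):

import numpy as np

def pckh(pred, gt, head_len):
    # pred, gt: (K, 2) keypoint arrays; only keypoints visible in gt are scored
    visible = (gt != -1).all(axis=1)
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist[visible] < 0.5 * head_len).mean())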

Pre-trained model

Our pre-trained model can be downloaded from Google Drive or Baidu Disk.

Citation

If you use this code for your research, please cite our paper.

@inproceedings{zhu2019progressive,
  title={Progressive Pose Attention Transfer for Person Image Generation},
  author={Zhu, Zhen and Huang, Tengteng and Shi, Baoguang and Yu, Miao and Wang, Bofei and Bai, Xiang},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2347--2356},
  year={2019}
}

Acknowledgments

Our code is based on the popular pytorch-CycleGAN-and-pix2pix.

pose-transfer's Issues

Unable to download dataset

I cannot seem to download the DeepFashion dataset (In-shop Clothes Retrieval Benchmark); the image file stops downloading after reaching a certain limit.

How much training data in Market1501 did you use to train your model?

Hi, this is nice work!
I am working on reproducing it but always get worse results than yours. The following test results were generated by models trained for 50 epochs and for 100 epochs.
[result images after 50 and 100 epochs attached in the original issue]

I have noticed that you mentioned in another issue that results would be good after 500 epochs.

If I use all the data described in the training CSV for 500 epochs on an RTX 2080 12G, it would take about 600+ hours with batch size 32.
So could you tell me how much training data in Market1501 you used to train your model?
Also, as you can see, the results look a little blurry; can you give me any suggestions?

video demonstration

Sorry to bother you with a question. I trained the model with my own data and tested the results. Because my data comes from video frames, I want to stack the generated images back into a video, but I don't know whether there is something wrong with my method: there is a sense of jerkiness between consecutive frames. I wonder how you stack the images. Thanks a lot.

What do "BP" and "P" stand for in the program?

Greetings! When training the model with the market-1501 dataset, we have P_input_nc == 3 and BP_input_nc == 18. I assume P_input_nc is the number of channels of the input images, while BP_input_nc is the number of channels of the input poses, i.e. the 18-keypoint representation. In this case, what do "BP" and "P" stand for, respectively? Thanks in advance!

Inception Score and SSIM score on the market-1501 test set

Hi! Thanks for the open-source code of your project. I have a few questions about the IS and SSIM scores.
I followed the README, using your data and code, but I got results different from your paper:
IS 3.1618128
Mask-IS 3.735669
SSIM 0.281508094
Mask-SSIM 0.799356671
What is the reason?

about pretrained model performance and deformable skip

Hi, I tested the provided pretrained generator on Market1501, but I got poor scores (SSIM: 0.281, IS: 3.16), which is not consistent with the paper. The version of Pytorch I used is 0.4. I also tested a model trained by myself; its performance is much better.

Another question is about the deformable skip: I am confused about how to combine PATN with deformable GANs. Could you give more details about what the skip connections connect?
Thanks.

Generated '.npy' has only one non-zero point for each keypoint?

Firstly, thanks for your great work!
I followed your instructions and ran python tool/generate_pose_map_fashion.py to generate the '.npy' files. After completion, I checked these '.npy' files and found that there is only one non-zero point for each keypoint. But I think it should be a Gaussian map, because you employ the function cords_to_map.
Where did it go wrong?
THANKS!
@tengteng95

docker image for Evaluation

I have tried and failed 3000 times to do the evaluation.
Finally I came to a conclusion: docker, docker, we need docker!

Share my solution below:

docker run -v <Pose-Transfer path>:/tmp -w /tmp --runtime=nvidia -it --rm tensorflow/tensorflow:1.4.1-gpu-py3 bash
# now in docker:
$ pip install scikit-image tqdm 
$ python tool/getMetrics_market.py

Compatibility problem of pretrained model.

Hi @tengteng95, thanks for your great work!
There is a compatibility problem when I want to load the pretrained model for testing:

RuntimeError: Error(s) in loading state_dict for PATNetwork:
        Unexpected running stats buffer(s) "model.stream1_down.2.running_mean" and "model.stream1_down.2.running_var" for InstanceNorm2d with track_running_stats=False. If state_dict is a checkpoint saved before 0.4.0, this may be expected because InstanceNorm2d does not track running stats by default since 0.4.0. Please remove these keys from state_dict. If the running stats are actually needed, instead set track_running_stats=True in InstanceNorm2d to enable them. See the documentation of InstanceNorm2d for details.

Can you give me some suggestions to solve this problem?
In addition, how can I determine whether training is complete, and how many epochs are needed?
THANKS!

How to obtain keypoint annotations?

Hello, this is really a wonderful project. I have some questions about reproducing it:

  1. Did you generate market-pairs-train.csv, market-pairs-test.csv, market-annotation-train.csv and market-annotation-test.csv using this model: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation ?
  2. What are the meanings of the 18 keypoints? (Which is the eye, which is the shoulder, etc.?)
  3. Can I build my own dataset using the Realtime_Multi-Person_Pose_Estimation model?

Thank you!

How do you split datasets to get 'train.lst' and 'test.lst'?

Thanks for your interesting work! How do you split the full fashion dataset into a training set and a test set? I see 'train.lst' and 'test.lst', but I want to know how these two lists were obtained: randomly selected from the whole set, or test images selected in a fixed number for each person ID?

Trained Model for your network

Nice work. Could you provide a trained model for your network? I only found the pretrained model for pose estimation in the Google Drive.
Thanks. :smile:

When I test fashion data, it shows {"eid": "main"} and stops there

Total number of parameters: 41358019

model [TransferModel] was created
WARNING:root:Setting up a new session...
POST /env/main HTTP/1.1
Host: localhost:8097
Accept: */*
Connection: keep-alive
User-Agent: python-requests/2.21.0
Accept-Encoding: gzip, deflate
Content-Length: 15

{"eid": "main"}

Different image naming format from previous work

Hi, thanks for your great code~

One problem: I find that the image naming format differs between your work and PG2 (https://arxiv.org/pdf/1705.09368.pdf). You name the images as 'fashion+men/women+cloth+id+view.jpg', but PG2 renames the IDs starting from 00001. Would you mind providing the mapping between your image names and those in PG2? In other words, how did you collect the new split under your naming format?

Thanks~~

No path separator '/' in image paths in train.lst and test.lst

Hi, thanks for your interesting work! I found that there is no path separator '/' in the image paths in train.lst and test.lst, so when I run generate_fashion_datasets.py, I get the following message:

cp: cannot stat 'fashion_data/DeepFashion/fashionWOMENTees_Tanksid0000290606_2side.jpg': No such file or directory
cp: cannot stat 'fashion_data/DeepFashion/fashionWOMENPantsid0000687301_1front.jpg': No such file or directory
......
How can I fix this? Thanks a lot!

Channel size for stream2

Hi, thank you for such an interesting project!
I was trying to understand the structure of the PATBlocks, but I was a bit confused by the conv layer parameters.
I am assuming cated_stream2 indicates that this is the stream of the pose code (not the initial image code).

if cated_stream2:
    conv_block += [nn.Conv2d(dim*2, dim*2, kernel_size=3, padding=p, bias=use_bias),
                   norm_layer(dim*2), nn.ReLU(True)]
else:
    conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=p, bias=use_bias),
                   norm_layer(dim), nn.ReLU(True)]

But I am still not sure why the in_channels parameter had to be set to dim * 2 and not simply dim. Can you explain why?

Thanks!
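For context, a minimal sketch of the wiring that would motivate in_channels = dim * 2; this is an assumption about the intent, not the authors' exact code: the cated stream operates on the channel-wise concatenation of image-stream and pose-stream features.

import torch
import torch.nn as nn

dim = 64                                  # hypothetical channel width
x1 = torch.randn(1, dim, 32, 32)          # image-stream features
x2 = torch.randn(1, dim, 32, 32)          # pose-stream features
cated = torch.cat([x1, x2], dim=1)        # shape (1, dim * 2, 32, 32)
conv = nn.Conv2d(dim * 2, dim * 2, kernel_size=3, padding=1)
print(conv(cated).shape)                  # torch.Size([1, 128, 32, 32])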

Keypoint_y and keypoint_x in market-annotation-train.csv

Greetings! I'm wondering what keypoint_y and keypoint_x stand for in market-annotation-train.csv. I suppose they represent the y and x coordinates of those 18 keypoints? Also, to generate keypoints files by ourselves, are we supposed to change the path input_folder in tool/compute_coordinates.py to a directory that contains all our training images? Thanks in advance!

How to load the checkpoints?

I want to load the checkpoints saved in the ./checkpoints directory. Is there any way I can load them instead of restarting training from scratch every time?

Bad IS and SSIM score

Really, many thanks for your awesome work.

However, using your code and hyper-parameters, I got bad evaluation results after training by myself:
SSIM=0.268 IS=3.652 mask-SSIM=0.792 mask-IS=3.733

Meanwhile, using your provided pre-trained model, I got roughly the right results:
SSIM=0.311 IS=3.329 mask-SSIM=0.811 mask-IS=3.778

My training environment is pytorch 0.3.1; I don't know why the results are different.

Thanks again~

Over-fitting problems trained on market datasets

I have trained a model on the market dataset (but I did not use the hyperparameters you provided). I found that the performance of the model differs greatly between the training set and the test set.
Have you ever encountered over-fitting problems when training on the market dataset?

Although there are 200,000+ training pairs, only 1,000+ persons are included.
Meanwhile, as the backgrounds are different, the reconstruction loss will dominate, which I think leads to over-fitting.

ResnetDiscriminator downsampling number

Hello!
I have a question about the ResnetDiscriminator's downsampling process.

As far as I can see, n_downsampling is either smaller than or equal to 2, or it is 3 (there are no conv layers for n_downsampling > 3).
My question is: is there an intuition behind this, or did you limit the number of downsampling steps based on the results, or something else?

invalid device ordinal

THCudaCheck FAIL file=..\torch\csrc\cuda\Module.cpp line=37 error=10 : invalid device ordinal
Traceback (most recent call last):
File "test.py", line 10, in
opt = TestOptions().parse()
File "C:\Projects\Pose-Transfer\options\base_options.py", line 75, in parse
torch.cuda.set_device(self.opt.gpu_ids[0])
File "C:\Users\mohit\AppData\Local\Programs\Python\Python37\lib\site-packages\torch\cuda_init_.py", line 300, in set_device
torch._C._cuda_setDevice(device)
RuntimeError: cuda runtime error (10) : invalid device ordinal at ..\torch\csrc\cuda\Module.cpp:37

Device config:
Intel Core i7-8750H
NVIDIA GTX-1050ti 4GB VRAM
32 GB DDR4 Memory (2667 MHz)
PLEASE HELP..

How to generate a GIF

After I successfully run the test on the dataset, the result returned is a concatenation of five images. How can I test my own everyday photos to generate a GIF?

How to reduce noises in generated images?

I trained a model on my computer using your code, and found that the images generated by my model have more noise than those generated by your pre-trained model. I want to know whether there are any tricks during initialization or training? Thanks.

Here are some examples (the original issue attaches comparison images: outputs of my model vs. the pre-trained model on two DeepFashion examples):

I trained my model using all the hyper-parameters described in the README for 700 epochs.

No model details in ‘pose_estimator.h5’

Hi, I tried to run 'compute_coordinates.py' to generate keypoint files for the DukeMTMC-reID dataset. But there is a warning "No training configuration found in save file", and the keypoint data in the csv file are all -1.
I would really appreciate it if you could help me with this problem :-)

training with my data

Hello, I trained with my own data. During testing, I found that if the poses of the source image and the target image were too different, the generated person's arms (between shoulder and elbow) were always broken. May I ask if you have encountered this kind of problem?

How did you process the Fashion Dataset?

Thanks for your amazing work! I noticed that in your pose-heatmap generation script here, the image size is set to (256, 176), whereas in my downloaded dataset all the images are square (256 x 256). Do I need to center-crop the images in my dataset to 256 x 176, or should I just carry on training and ignore the inconsistency? Looking forward to hearing from you!

Feature requests...

It is a good project! I managed to generate results from master but not the latest branch...

I have a hard time reproducing the results, though, mainly due to environment/package compatibility issues. It would be really nice if we could have a Dockerfile or Pipfile, or a shared docker image! (e.g. I could not install roi_align on my local computer...)

Best :)

about pose transfer quality

You use paired samples for training; might it produce worse results when I give a specified pose that never appears for a given ID?

Estimator does not work

Hello. The human pose estimation model outputs keypoint values that are all -1, which seems wrong and prevents me from testing on the Market-1501 query set. I hope to obtain annotations for the query set, or a working estimator. Since I could not solve the above problem, I also tried OpenPose, but I still have questions about how to generate the annotation files that Pose-Transfer needs. Please explain further the model or method used to generate the annotation files. Many thanks.

How to use the pre-trained model ‘latest_net_netG.pth’

Nice work. I downloaded your trained model latest_net_netG.pth, but I don't know how to use it. I am new to GitHub. I could only load some parameters from latest_net_netG.pth. Could you tell me what I should do to use the .pth file?

Why so many training iterations ? (500 + 200) * 4000

Thanks a lot for this awesome paper as well as the open-source code.

In the paper, the authors claim that the model is trained with Adam for 90K iterations, where the learning rate decays to 0 during the latter 60K iterations.

However, in the code I find that the total iteration count is set to (500 + 200) * 4000, which is far more than the setting in the paper.

Could you please explain this for me? Thanks again.
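For reference, the arithmetic behind the question, with one possible but unconfirmed reconciliation:

# (500 + 200) epochs x 4000 samples per epoch (4000 is hard-coded in
# data/keypoint.py __len__; see the next issue) = 2,800,000 samples seen.
samples_seen = (500 + 200) * 4000
# Assumption, not confirmed by the authors: if an "iteration" in the paper
# means one optimizer step at the Market-1501 batchSize of 32, this gives
print(samples_seen // 32)  # 87500, close to the paper's 90K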

Limit on the keypoint dataset

Firstly, thanks for the great work and open source code.

In the data/keypoint.py file, you have manually set the dataset length to 4000. This limits the training set to 4000 image pairs per epoch, and I believe it should be changed to return self.size.

    def __len__(self):
        if self.opt.phase == 'train':
            return 4000

Maybe I'm missing something. Is there a reason for limiting the dataset?
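The fix proposed in this issue is a one-liner (a sketch; self.size is assumed to hold the number of pairs loaded from the pair CSV):

    def __len__(self):
        if self.opt.phase == 'train':
            return self.size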

how to use the generated data?

Hi. How can I use the data generated by the pose-transfer model to expand the ReID training set? I found that there is a clear difference between the target and the original; that is, the object ID has been changed.

The size of tensor a (16) must match the size of tensor b (320) at non-singleton dimension 3

....
Total number of parameters: 41365763


model [TransferModel] was created
WARNING:root:Setting up a new session...
200
3
False
process 0/999999 img ..
Traceback (most recent call last):
File "test.py", line 39, in
model.test()
File "/home/ldf/www/Pose-Transfer/models/PATN.py", line 137, in test
self.fake_p2 = self.netG(G_input)
File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in call
result = self.forward(*input, **kwargs)
File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 165, in forward
return nn.parallel.data_parallel(self.model, input, self.gpu_ids)
File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 115, in data_parallel
return module(*inputs[0], **module_kwargs[0])
File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in call
result = self.forward(*input, **kwargs)
File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 148, in forward
x1, x2, _ = model(x1, x2)
File "/home/ldf/www/py35_pytorch031/lib/python3.5/site-packages/torch/nn/modules/module.py", line 357, in call
result = self.forward(*input, **kwargs)
File "/home/ldf/www/Pose-Transfer/models/model_variants.py", line 63, in forward
x1_out = x1_out * att
RuntimeError: The size of tensor a (16) must match the size of tensor b (320) at non-singleton dimension 3

where are fasion-annotation-test.csv and fasion-annotation-train.csv?

In resize_fashion.py, line 46:
resize_annotations(root_dir + 'fasion-annotation-test.csv', root_dir + 'fasion-resize-annotation-test.csv')

resize_dataset(root_dir + '/train', root_dir + 'fashion_resize/train')
resize_annotations(root_dir + 'fasion-annotation-train.csv', root_dir + 'fasion-resize-annotation-train.csv')

I just found fasion-resize-pairs-train.csv and fasion-resize-pairs-test.csv.

FileNotFoundError: [Errno 2] No such file or directory: 'market_data/trainK/0432_c5s1_105323_05.jpg.npy'

root@1123a6f58736:/home/Pose-Transfer-master/Pose-Transfer# python tool/generate_pose_map_market.py
processing 0 / 12936 ...
market_data/trainK 0432_c5s1_105323_05.jpg
Traceback (most recent call last):
File "tool/generate_pose_map_market.py", line 41, in
compute_pose(img_dir, annotations_file, save_path)
File "tool/generate_pose_map_market.py", line 39, in compute_pose
np.save(file_name, pose)
File "/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py", line 524, in save
fid = open(file, "wb")
FileNotFoundError: [Errno 2] No such file or directory: 'market_data/trainK/0432_c5s1_105323_05.jpg.npy'

sigma in generate_pose_map_fashion.py

Hi, thanks for your work! The param sigma in generate_pose_map_fashion.py's compute_pose(img_dir, annotations_file, save_path) doesn't have a default value. What value can I use?
