
Modified VocGAN


This repo implements a modified version of [VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network](https://arxiv.org/abs/2007.15256) in PyTorch; for the original VocGAN, check out the `baseline` branch. I slightly modified VocGAN's generator and used Full-Band MelGAN's discriminator instead of VocGAN's, because in my experiments MelGAN's discriminator trains quickly and is powerful enough to drive the generator to produce high-fidelity voice, whereas VocGAN's hierarchically-nested JCU discriminator is quite large and slows training dramatically.
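
For context, here is a minimal sketch of a MelGAN-style multi-scale discriminator: a few identical window-based sub-discriminators run on progressively average-pooled audio and return intermediate feature maps plus a realness score. This is a paraphrase of the published MelGAN design, not this repo's exact code; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubDiscriminator(nn.Module):
    """One window-based discriminator over raw audio (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Conv1d(1, 16, kernel_size=15, padding=7),
            nn.Conv1d(16, 64, kernel_size=41, stride=4, padding=20, groups=4),
            nn.Conv1d(64, 256, kernel_size=41, stride=4, padding=20, groups=16),
            nn.Conv1d(256, 256, kernel_size=5, padding=2),
        ])
        self.out = nn.Conv1d(256, 1, kernel_size=3, padding=1)

    def forward(self, x):
        feats = []                      # intermediate maps, usable for FM loss
        for layer in self.layers:
            x = F.leaky_relu(layer(x), 0.2)
            feats.append(x)
        return feats, self.out(x)       # (feature maps, realness score)

class MultiScaleDiscriminator(nn.Module):
    """Runs sub-discriminators at several time scales of the waveform."""
    def __init__(self, n_scales=3):
        super().__init__()
        self.subs = nn.ModuleList(SubDiscriminator() for _ in range(n_scales))
        self.pool = nn.AvgPool1d(kernel_size=4, stride=2, padding=1)

    def forward(self, x):               # x: (batch, 1, samples)
        outs = []
        for sub in self.subs:
            outs.append(sub(x))
            x = self.pool(x)            # halve the time resolution
        return outs
```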

Tested on Python 3.6

pip install -r requirements.txt

Prepare Dataset

  • Download a dataset for training. This can be any set of wav files with a 22050 Hz sample rate (e.g., LJSpeech, which was used in the paper).
  • preprocess: python preprocess.py -c config/default.yaml -d [data's root path]
  • Edit configuration yaml file

Train & Tensorboard

  • python trainer.py -c [config yaml file] -n [name of the run]

    • cp config/default.yaml config/config.yaml and then edit config.yaml
    • Write the root paths of the train/validation files on the 2nd/3rd lines (see the sketch after this list).
  • tensorboard --logdir logs/
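
For orientation, the first lines of the edited config look roughly like this. The key names are inferred from the issues quoted below, the paths are placeholders, and config/default.yaml remains the authoritative layout:

```yaml
# config.yaml (top of file; paths are placeholders)
data:
  train: 'D:\datasets\LJSpeech-1.1\wavs'        # 2nd line: training wav root
  validation: 'D:\datasets\LJSpeech-1.1\valid'  # 3rd line: validation wav root
```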

Notes

  1. This repo implements a modified VocGAN for faster training; for the true VocGAN implementation, please check out the `baseline` branch. In my testing, the modified VocGAN generates high-fidelity audio in real time.
  2. The training cost of the baseline VocGAN's discriminator is very high (2.8 sec/it on a P100 with batch size 16) compared to the generator alone (7.2 it/sec on the same hardware and batch size), so it is infeasible for me to train that model for a long time.
  3. We may be able to optimize the baseline VocGAN's discriminator by downsampling the audio at the preprocessing stage instead of at training time (currently `torchaudio.transforms.Resample` is used as a layer to downsample the audio); this step should speed up overall discriminator training. See the sketch after these notes.
  4. I trained the baseline model for 300 epochs (with batch size 16) on LJSpeech, and the quality of the generated audio is similar to MelGAN at the same epoch count on the same dataset. The authors recommend training for 3000 epochs, which is not feasible at the current training speed (2.80 sec/it).
  5. I am open to any suggestions and modifications to this repo.
  6. For a more complete, end-to-end voice cloning or text-to-speech (TTS) toolbox 🤖, please visit Deepsync Technologies.
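
A minimal sketch of the idea in note 3, assuming 22050 Hz source audio and an illustrative 2x downsampling target; the file paths and target rate are placeholders, and the real preprocessing lives in preprocess.py:

```python
# Sketch: resample each wav once at preprocessing time and save it to disk,
# instead of running torchaudio.transforms.Resample inside the discriminator.
import torchaudio

resample = torchaudio.transforms.Resample(orig_freq=22050, new_freq=11025)

def preprocess_wav(in_path: str, out_path: str) -> None:
    wav, sr = torchaudio.load(in_path)   # wav: (channels, samples)
    assert sr == 22050, "expected 22050 Hz input"
    torchaudio.save(out_path, resample(wav), 11025)  # resampled once, on disk
```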

Inference

  • python inference.py -p [checkpoint path] -i [input mel path]

Pretrained models

Two pretrained models are provided. Both are trained with the modified VocGAN structure.

Audio Samples

Using pretrained models, we can reconstruct audio samples. Visit here to listen.

Results

[WIP]

Contributors

0xflotus, carankt, jacinder, jackson-kang, levinna, rishikksh20

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Issues

Any pretrained models?

Hello!
Are there any pretrained models available?

Also, big thanks for your beautiful repo 🤩

KeyError: '__getstate__'

F:\ProgramData\Anaconda3\python.exe F:/work/VocGAN-master/trainer.py
F:\ProgramData\Anaconda3\lib\site-packages\torchaudio\extension\extension.py:14: UserWarning: torchaudio C++ extension is not available.
warnings.warn('torchaudio C++ extension is not available.')
F:\ProgramData\Anaconda3\lib\site-packages\torchaudio\backend\utils.py:64: UserWarning: The interface of "soundfile" backend is planned to change in 0.8.0 to match that of "sox_io" backend and the current interface will be removed in 0.9.0. To use the new interface, do torchaudio.USE_SOUNDFILE_LEGACY_INTERFACE = False before setting the backend to "soundfile". Please refer to pytorch/audio#903 for the detail.
'The interface of "soundfile" backend is planned to change in 0.8.0 to '
Generator :

Trainable Parameters: 4.714M
Discriminator :

Trainable Parameters: 4.355M
2020-12-15 16:11:56,529 - INFO - Starting new training run.
Validation loop: 0%| | 0/36 [00:00<?, ?it/s]F:\ProgramData\Anaconda3\lib\site-packages\torch\functional.py:516: UserWarning: stft will require the return_complex parameter be explicitly specified in a future PyTorch release. Use return_complex=False to preserve the current behavior or return_complex=True to return a complex output. (Triggered internally at ..\aten\src\ATen\native\SpectralOps.cpp:653.)
normalized, onesided, return_complex)
F:\ProgramData\Anaconda3\lib\site-packages\torch\functional.py:516: UserWarning: The function torch.rfft is deprecated and will be removed in a future PyTorch release. Use the new torch.fft module functions, instead, by importing torch.fft and calling torch.fft.fft or torch.fft.rfft. (Triggered internally at ..\aten\src\ATen\native\SpectralOps.cpp:590.)
normalized, onesided, return_complex)
g 8.2491 d 2.8965 ad 0.9612 | step 0: 100%|██████████| 36/36 [00:05<00:00, 6.67it/s]
Loading train data: 0%| | 0/40 [00:00<?, ?it/s]
2020-12-15 16:12:04,076 - INFO - Exiting due to exception: '__getstate__'
Traceback (most recent call last):
File "F:\work\VocGAN-master\utils\train.py", line 81, in train
(melD, audioD) in loader:
File "F:\ProgramData\Anaconda3\lib\site-packages\tqdm\std.py", line 1127, in __iter__
for obj in iterable:
File "F:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 352, in __iter__
return self._get_iterator()
File "F:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "F:\ProgramData\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 801, in __init__
w.start()
File "F:\ProgramData\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "F:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "F:\ProgramData\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "F:\ProgramData\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "F:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
KeyError: '__getstate__'
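
This traceback shows Windows' spawn-based multiprocessing failing to pickle something reachable from the dataset (hence the ForkingPickler frame). A common workaround, offered here as a guess rather than a confirmed fix for this repo, is to keep the DataLoader in the main process:

```python
# Sketch: num_workers=0 avoids pickling the dataset for spawned workers on
# Windows. `dataset` stands in for whatever utils/train.py constructs.
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=0)
```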

Metallic / Robotic sound

I'm trying to train the VocGAN model in two stages, STFT pretraining followed by adversarial training (+ STFT loss), but I'm facing a metallic/robotic speech sound problem. If I train MelGAN, I get nearly-normal speech in about ~100 epochs, but VocGAN demonstrates significantly worse results even with more epochs (100, 200, 300, ...).

Is this normal? Or do I just need to wait longer?

If it is not normal, what should I check in my model and pipeline? (I forked your repo, but I needed to adapt the model to work with a hop_length of 200 and a sampling rate of 16,000.)

Assertion error: torchaudio resample_waveform related

I am facing an assertion error; the log is as follows:

Traceback (most recent call last):
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/utils/train.py", line 98, in train
disc_real, disc_real_multiscale = model_d(audioG, melG)
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/model/hierarchical_discriminator.py", line 30, in forward
x_ = down_(x)
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/venv/lib/python3.6/site-packages/torchaudio/transforms.py", line 382, in forward
return kaldi.resample_waveform(waveform, self.orig_freq, self.new_freq)
File "/home/stuart/sagar/speech_analysis_synth/VocGAN/venv/lib/python3.6/site-packages/torchaudio/compliance/kaldi.py", line 802, in resample_waveform
assert waveform.dim() == 2
AssertionError

Any way out of this issue would be helpful.
Thanks
Sagar
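
For what it's worth, the failing assert (`waveform.dim() == 2`) means kaldi.resample_waveform expects a 2-D (channel, time) tensor, so a batched 3-D input would trip it. Below is a sketch of a reshape that would satisfy the assert; the actual tensor shapes inside hierarchical_discriminator.py are an assumption:

```python
import torch

x = torch.randn(16, 1, 8192)       # hypothetical (batch, channel, samples)
x2d = x.view(-1, x.size(-1))       # -> (16, 8192); now x2d.dim() == 2
```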

validation path and valid data

Hi, after preprocessing the dataset, I have the extracted mels in my assigned mel path.
But for the next step, running trainer.py, there should be a proper validation path on the 3rd line of the config:
validation: 'H:\Deepsync\backup\deepsync\LJSpeech-1.1\valid'
I am confused about which part of the data I should put in this valid path: the paired mels and wavs for validation?
Since you have designated LJ/wavs as the train data path, how should the data be split into training and validation parts?
Looking forward to your reply, and many thanks!

Inference speed on CPU

In the paper, it is said that VocGAN can synthesize speech waveforms 3.24x faster than real time on a CPU. Did you verify this? Thanks.
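
As a generic way to check such a claim, one can measure the real-time factor (RTF): audio seconds produced per wall-clock second. This sketch assumes a vocoder callable that maps a mel tensor to a mono waveform of shape (1, samples); `model` and `mel` are placeholders:

```python
import time
import torch

@torch.no_grad()
def real_time_factor(model, mel, sample_rate=22050):
    start = time.perf_counter()
    audio = model(mel)                         # assumed shape: (1, samples)
    elapsed = time.perf_counter() - start
    return (audio.numel() / sample_rate) / elapsed  # > 1.0: faster than real time
```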

Should there be any changes in the inference code for Korean inputs?

Hi! I noticed that you have a pretrained model trained on the KSS dataset, and I have trained a mel synthesizer model on the same dataset. When I use the Griffin-Lim algorithm to reconstruct from the linear output, I get good enough generated speech. However, when I use the mel output of the synthesizer as the input to your model, the output is 11 seconds of nothing. Should there be any modifications when using Korean inputs?

Training speed?

How fast is training, and on what hardware, for this modified implementation (generator and discriminator)? How quickly does training finish?

[Question] Multi-Speaker Training

Hi rishikksh20!! Thank you for your excellent code.
I have a question about multi-speaker training.

Can I train VocGAN in a multi-speaker environment?
I have used WaveRNN trained with five speakers. I want to know whether this is also possible with VocGAN, and if it is, will the sound quality be a little lower?

Thank You!

Can't I change the length of the output voice?

Hello, I tried to use a pretrained model with the KSS data, but a 1-second wav file was produced. Can that model only make 1-second files? If I want a 7-second voice file, which part should I modify? Thank you.

JCU Discriminator implementation details

Hi, thanks for your implementation which already helps me a lot, but I still have several questions:

  1. As for the JCU discriminator, the authors mention using a convolution module on the input mel spectrogram to compute the conditional output (Fig. 2). In your code, the mel and the transformed waveform are simply concatenated along the temporal dimension despite having different lengths (actually a 32x difference). Wouldn't this concatenated result be improper for the later computation? How do you think this conditional convolution should be performed?

  2. In your code, the MelGAN discriminator outputs consist of n_layers=3 feature maps plus out_score, but the number of layers in the discriminator is 4 if I understood correctly. Why did you change this layer setting for the feature-map output?

  3. VocGAN also mentions a feature-matching loss that goes into the summed loss being optimized (Eq. 9 in the paper). But in your implementation, there seem to be only the conditional and unconditional outputs of each JCU discriminator, without the groups of feature-map outputs. So how is this part obtained for the computation of L(FM)?

Looking forward to your reply and so many thanks!
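
On question 3, the feature-matching loss in MelGAN/VocGAN-style vocoders is usually an L1 distance between the discriminator's intermediate feature maps for real and generated audio. A minimal sketch of that standard form (not this repo's code; feats_real/feats_fake are assumed to be lists of per-layer feature maps):

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(feats_real, feats_fake):
    # Sum of L1 distances between matching discriminator feature maps;
    # real features are detached so only the generator receives gradients.
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        loss = loss + F.l1_loss(ff, fr.detach())
    return loss
```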

[Proposal] Reduce training time by resampling beforehand

The biggest bottleneck in your model, speed-wise, is the Resample layer in the hierarchical discriminator; its speed is just awful. If you resample your dataset beforehand and load it from disk, your training will be up to 3x faster.

Is it possible to train KSS dataset in master branch?

Thanks for sharing these great results.
I want to train a vocoder on the KSS data for use with FastSpeech 2, using the master branch code. Is that possible?

Avg : g 1.3729 d 0.0000 ad 0.0000 | step 5976: 100%|██████████| 166/166 [05:01<00:00, 1.81s/it]
Avg : g 1.3659 d 0.0000 ad 0.0000 | step 6142: 100%|██████████| 166/166 [05:01<00:00, 1.82s/it]
Avg : g 1.3601 d 0.0000 ad 0.0000 | step 6308: 100%|██████████| 166/166 [05:06<00:00, 1.85s/it]
Avg : g 1.3537 d 0.0000 ad 0.0000 | step 6474: 100%|██████████| 166/166 [05:02<00:00, 1.82s/it]
Avg : g 1.3493 d 0.0000 ad 0.0000 | step 6640: 100%|██████████| 166/166 [05:05<00:00, 1.84s/it]
g 2.5462 d 3.1285 ad 1.0426 | step 6640: 100%|██████████| 2196/2196 [00:46<00:00, 47.20it/s]
Avg : g 1.3415 d 0.0000 ad 0.0000 | step 6680:  24%|██▍       | 40/166 [01:14<01:10, 1.79it/s]

The d loss and ad loss didn't show any result when I trained; why do they stay at zero?
