
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery (ICCV 2021 Oral)

Run this model on Replicate

Optimization: Open In Colab Mapper: Open In Colab

Global directions Torch: Open In Colab Global directions TF1: Open In Colab

Full Demo Video | ICCV Video

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
Or Patashnik*, Zongze Wu*, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski
*Equal contribution, ordered alphabetically
https://arxiv.org/abs/2103.17249

Abstract: Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees of freedom, or an annotated collection of images for each desired manipulation. In this work, we explore leveraging the power of recently introduced Contrastive Language-Image Pre-training (CLIP) models in order to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt. Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation. Finally, we present a method for mapping text prompts to input-agnostic directions in StyleGAN’s style space, enabling interactive text-driven image manipulation. Extensive results and comparisons demonstrate the effectiveness of our approaches.

Description

Official Implementation of StyleCLIP, a method to manipulate images using a driving text. Our method uses the generative power of a pretrained StyleGAN generator, and the visual-language power of CLIP. In the paper we present three methods:

  • Latent vector optimization.
  • Latent mapper, trained to manipulate latent vectors according to a specific text description.
  • Global directions in the StyleSpace.

Updates

31/10/2022 Add support for global direction with torch implementation

15/8/2021 Add support for StyleSpace in optimization and latent mapper methods

6/4/2021 Add mapper training and inference (including a jupyter notebook) code

6/4/2021 Add support for custom StyleGAN2 and StyleGAN2-ada models, and also custom images

2/4/2021 Add the global directions code (a local GUI and a colab notebook)

31/3/2021 Upload paper to arxiv, and video to YouTube

14/2/2021 Initial version

Setup (for all three methods)

For all the methods described in the paper, a common environment setup is required; specific requirements for each method are described in its own section. To install CLIP, please run the following commands:

conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=<CUDA_VERSION>
pip install ftfy regex tqdm gdown
pip install git+https://github.com/openai/CLIP.git
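Once installed, CLIP can be sanity-checked with a few lines of Python. This is only an optional verification snippet, not part of the repository:

import torch
import clip

# Load the ViT-B/32 CLIP model on the GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Encode a sample text prompt; the embedding should come out with shape [1, 512].
text = clip.tokenize(["a face with blonde hair"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)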

Editing via Latent Vector Optimization

Setup

Here, the code relies on the Rosinality PyTorch implementation of StyleGAN2. Some parts of the StyleGAN implementation were modified so that the whole implementation is native PyTorch.

In addition to the requirements mentioned above, a pretrained StyleGAN2 generator will be downloaded automatically (or can be downloaded manually from here).

Usage

Given a textual description, one can either edit a given image or generate a random image that best fits the description. Both operations can be done through the main.py script, or the optimization_playground.ipynb notebook (Open In Colab).

Editing

To edit an image, set --mode=edit. Editing can be done both on a provided latent vector and on a random latent vector from StyleGAN's latent space. It is recommended to adjust --l2_lambda according to the desired edit.
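For example, an editing run might look like the following sketch; the flag names and values are illustrative and should be verified against main.py:

# Illustrative only; verify the exact argument names in main.py.
# Edit a real image whose e4e latent was saved to latent.pt (the --l2_lambda value is just an example):
python main.py --mode edit --description "a person with purple hair" --latent_path latent.pt --l2_lambda 0.008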

Generating Free-style Images

To generate a free-style image set --mode=free_generation.
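For example (illustrative; check main.py for the exact arguments):

# Illustrative only; verify the exact argument names in main.py.
python main.py --mode free_generation --description "a person with purple hair"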

Editing via Latent Mapper

Here, we provide the code for the latent mapper. The mapper is trained to learn residuals from a given latent vector, according to the driving text. The code for the mapper is in mapper/.

Setup

As in the optimization method, the code relies on the Rosinality PyTorch implementation of StyleGAN2. In addition to the StyleGAN weights, it is necessary to have weights for the facial recognition network used in the ID loss. The weights can be downloaded from here.

The mapper is trained on latent vectors. It is recommended to train on inverted real images. To this end, we provide the CelebA-HQ dataset inverted by e4e: train set, test set.

Usage

Training

  • The main training script is placed in mapper/scripts/train.py.
  • Training arguments can be found at mapper/options/train_options.py.
  • Intermediate training results are saved to opts.exp_dir. This includes checkpoints, train outputs, and test outputs. Additionally, if you have tensorboard installed, you can visualize tensorboard logs in opts.exp_dir/logs.
  • To resume a training, please provide --checkpoint_path.
  • --description is where you provide the driving text.
  • If you perform an edit that is not supposed to change "colors" in the image, it is recommended to use the flag --no_fine_mapper.

Example for training a mapper for the mohawk hairstyle:

cd mapper
python train.py --exp_dir ../results/mohawk_hairstyle --no_fine_mapper --description "mohawk hairstyle"

All configurations for the examples shown in the paper are provided there.

Inference

  • The main inference script is placed in mapper/scripts/inference.py.
  • Inference arguments can be found at mapper/options/test_options.py.
  • Adding the flag --couple_outputs will save an image containing the input and output images side-by-side.

Pretrained models for various edits are provided. Please refer to utils.py for the complete list of links.
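As a rough sketch, an inference run could look like the following; the argument names should be verified against mapper/options/test_options.py, and the checkpoint path is hypothetical:

cd mapper
# Illustrative only; verify the exact arguments in mapper/options/test_options.py.
python scripts/inference.py --exp_dir ../results/mohawk_inference --checkpoint_path ../results/mohawk_hairstyle/checkpoints/best_model.pt --couple_outputs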

We also provide a notebook for performing inference with the mapper. Mapper notebook: Open In Colab

Editing via Global Direction

Here we provide a GUI for editing images with the global directions. We provide both a Jupyter notebook (Open In Colab) and the GUI used in the video. For both, the linear directions are computed in real time. The code is located at global_directions/.

Setup

Here, we rely on the official TensorFlow implementation of StyleGAN2.

It is required to have TensorFlow, version 1.14 or 1.15 (conda install -c anaconda tensorflow-gpu==1.14).

Usage

Local GUI

To start the local GUI please run the following commands:

cd global_directions

# input dataset name 
dataset_name='ffhq' 

# the pretrained StyleGAN2 model from the standard [NVlabs implementation](https://github.com/NVlabs/stylegan2) will be downloaded automatically.
# a pretrained StyleGAN2-ada model can be downloaded from https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada/pretrained/ .
# for custom StyleGAN2 or StyleGAN2-ada model, please place the model under ./StyleCLIP/global_directions/model/ folder.


# input prepare data 
python GetCode.py --dataset_name $dataset_name --code_type 'w'
python GetCode.py --dataset_name $dataset_name --code_type 's'
python GetCode.py --dataset_name $dataset_name --code_type 's_mean_std'

# preprocess (this may take a few hours). 
# we precompute the results for StyleGAN2 on ffhq, and StyleGAN2-ada on afhqdog and afhqcat. For these models, the preprocess step can be skipped.
python SingleChannel.py --dataset_name $dataset_name

# generate the images to be manipulated 
# this operation will generate and replace the w_plus.npy and .jpg images in the './data/dataset_name/' folder. 
# if you want to keep the original data, please rename the original folder.
# to use custom images, please use the e4e encoder to generate latents.pt, place it in the './data/dataset_name/' folder, and add the --real flag while running this function.
# you may skip this step if you want to manipulate the real human faces we prepared in the ./data/ffhq/ folder.
python GetGUIData.py --dataset_name $dataset_name

# interactive manipulation 
python PlayInteractively.py --dataset_name $dataset_name

As shown in the video, to edit an image it is required to write a neutral text and a target text. To operate the GUI, please do the following:

  • Maximize the window size
  • Double click on the left square to choose an image. The images are taken from global_directions/data/ffhq, and the corresponding latent vectors are in global_directions/data/ffhq/w_plus.npy.
  • Type a neutral text, then press enter
  • Modify the target text so that it will contain the target edit, then press enter.

You can now play with:

  • Manipulation strength - positive values correspond to moving along the target direction.
  • Disentanglement threshold - a large value means a more disentangled edit: only a few channels are manipulated, so only the target attribute changes (for example, grey hair). A small value means a less disentangled edit: a large number of channels are manipulated, so related attributes change as well (such as wrinkles, skin color, glasses).
Examples:

  Edit          Neutral Text     Target Text
  Smile         face             smiling face
  Gender        female face      male face
  Blonde hair   face with hair   face with blonde hair
  Hi-top fade   face with hair   face with Hi-top fade hair
  Blue eyes     face with eyes   face with blue eyes

More examples can be found in the video and in the paper.

Practice Tips:

In the terminal, for every manipulation, the number of channels being manipulated is printed (the number is controlled by the attribute (neutral, target) and the disentanglement threshold).

  1. For color transformations, usually 10-20 channels are enough. For large structural changes (for example, Hi-top fade), usually 100-200 channels are required.
  2. For an attribute (neutral, target), if only a few channels (<20) are manipulated even with a low disentanglement threshold, it is usually not enough to perform the desired edit.

Notebook

Open the notebook in colab and run all the cells. In the last cell you can play with the image.

beta corresponds to the disentanglement threshold, and alpha to the manipulation strength.

After you set the desired parameters, please run the last cell again to generate the image.
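Conceptually, the manipulation keeps only the style channels whose relevance to the CLIP text direction exceeds beta and moves them by alpha. A rough numpy sketch of this rule follows; the array shapes and the function name are assumptions for illustration, not the repository's actual API:

import numpy as np

def global_direction_edit(s, fs3, dt, alpha, beta):
    # s:   (num_channels,) style code of the image being edited
    # fs3: (num_channels, 512) precomputed relevance of each style channel to CLIP image-space directions
    # dt:  (512,) normalized CLIP direction between the target and neutral text embeddings
    relevance = fs3 @ dt                                       # how much each channel moves the image along dt
    ds = np.where(np.abs(relevance) >= beta, relevance, 0.0)   # disentanglement threshold: drop weakly relevant channels
    return s + alpha * ds                                      # manipulation strength: step along the remaining channels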

Editing Examples

In the following, we show some results obtained with our methods. All images are real, and were inverted into StyleGAN's latent space using e4e. The driving text used for each edit appears below or above each image.

Latent Optimization

Latent Mapper

Global Directions

Related Works

The global directions we find for editing are directions in the S space, which was introduced and analyzed in StyleSpace (Wu et al.).

To edit real images, we invert them into StyleGAN's latent space using e4e (Tov et al.).

The code structure of the mapper is heavily based on pSp.

Citation

If you use this code for your research, please cite our paper:

@InProceedings{Patashnik_2021_ICCV,
    author    = {Patashnik, Or and Wu, Zongze and Shechtman, Eli and Cohen-Or, Daniel and Lischinski, Dani},
    title     = {StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {2085-2094}
}

Contributors

andreasjansson, betterze, lukestanley, minimaxir, orpatashnik, ozmig77, renhaa


Issues

What is the meaning of fs3?

Hi, I am following your work, and thank you for your excellent paper.

However, I have some confusion, please help me.

  1. What is the meaning of fs3 in the code? I don't know what it means or what it does.
  2. What do you estimate with the 200 images? I don't really understand this part of the paper.
  3. How do you get delta_i or delta_ic from the paper?

Looking forward to your responses, thank you very much!!!

wrong result when editing other image with GUI

Hi, thanks for your awesome work!
I tried to edit with the GUI via python PlayInteractively.py --dataset_name $dataset_name. On the official demo image I get a good editing result, which is really cool! Unfortunately, when I try to edit other images that I put into the test dataset after e4e inversion (python GetGUIData.py --dataset_name $dataset_name), the result seems to change into another image after editing. The images are 1024x1024, the same resolution as the demo. What should I do to edit other images? Thanks!

official image editing result:
src
other image editing result:
ours

[Help] Unbound Local Error

How do you fix "UnboundLocalError: local variable 'shape' referenced before assignment"? I have no coding experience, so hopefully it's not too hard.

OOM Using 8GB of GPU RAM

The following commands

python SingleChannel.py --dataset_name $dataset_name
and
python PlayInteractively.py --dataset_name $dataset_name

failed to work on a 2080S with 8 GB of GPU RAM.

RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 363.13 MiB already allocated; 768.00 KiB free; 368.00 MiB reserved in total by PyTorch)

Is there any way to reduce the required GPU RAM?

[Feature Request] Freeze / Limit Layers for Color Preservation

An issue that I'm encountering quite a lot with free_generation is whacky colors. For FFHQ 1024, of the 18 layers, the top 6 are basically just global color manipulation, judging from my experience with similar 1024px portrait StyleGAN2 models (not FFHQ exactly, but whatever modification of it is used on Artbreeder, with more digital paintings and such in it).

If the topmost layers (or perhaps a user-set list of layers) could be frozen, or given something like their own l2_lambda to limit how far they can diverge from the starting position (or how quickly, rather than how far), perhaps the colors would appear more natural and pleasing, while also forcing StyleCLIP to make changes to the facial content itself rather than wasting steps on things like white-balance adjustments, which may be completely unwanted.

Segmentation fault (core dumped)

Error when following this command for editing via global direction:
python GetGUIData.py --dataset_name $dataset_name

The error Segmentation fault (core dumped) occurs in Inference.py, line 14: self.model, preprocess = clip.load("ViT-B/32", device=device). Note that device is "cuda", and the CLIP model can be loaded with the same code in a terminal.

How to fix this problem?
Thanks!

The choice of parameters

Thank you for sharing.

I played with the Colab and obtained unpleasing results most of the time. It is hard to decide what value to use if different edits require different parameters for the CLIP loss.

Then I tried to use the pretrained CLIP model in my own work TediGAN.

clip_results

The obtained results are not sensitive to the weights.

clip_results_cw

Adding additional perceptual loss and image reconstruction loss may help stabilize the optimization process.

About optimization method vs mapper

Thanks for your nice work! I have tested your code; however, I found one issue which seems interesting. When I use your optimization method to manipulate the person with purple hair, the result hardly changes compared with the original image. When I use your mapper, the result looks very nice.
original image, optimization, mapper, respectively
I also tried many other descriptions and images, and the same issue exists. In my opinion, the optimization method should be better than the encoder approach, so I am very confused by this phenomenon. Did you find the same issue in your experiments? Can you give me some explanation of this issue? Looking forward to your reply :)

Preprocessing Bug

There's a small bug in run_optimization.py which could affect quality. The optimization seems to be learning around it.

Bug
The output of StyleGAN is directly passed into CLIP here.

How to fix

  1. StyleGAN outputs values in range [-1, 1] but some values fall outside the range so it needs to be clamped.
  2. The values need to be scaled from [-1, 1] to [0, 1]
  3. The values need to be normalized using the CLIP preprocessing.
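A hedged sketch of such a fix in PyTorch; the function name is illustrative, and the normalization constants are CLIP's published mean and std:

import torch

# CLIP's published normalization constants (RGB).
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

def preprocess_for_clip(img):
    # img: StyleGAN output of shape (N, 3, H, W), nominally in [-1, 1]
    img = img.clamp(-1.0, 1.0)                           # 1. clamp values that fall outside the range
    img = (img + 1.0) / 2.0                              # 2. rescale from [-1, 1] to [0, 1]
    img = (img - CLIP_MEAN.to(img)) / CLIP_STD.to(img)   # 3. apply CLIP's normalization
    return img                                           # resizing to CLIP's 224x224 input is still needed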

No such file or directory: './npy/ffhq/W.npy' when execute python GetCode.py

I tried the code in StyleCLIP_global.ipynb; the first job is to prepare the input data. When I execute the following commands, I get a FileNotFoundError:

Traceback (most recent call last):
File "GetCode.py", line 214, in
save_tmp=GetS(dataset_name,num_img=2_000)
File "GetCode.py", line 132, in GetS
dlatents=np.load(tmp)[:num_img]
File "/Users/liukai/opt/anaconda3/lib/python3.8/site-packages/numpy/lib/npyio.py", line 416, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: './npy/ffhq/W.npy'

# input dataset name
dataset_name='ffhq' #@param ['ffhq'] {allow-input: true}

#%cd /content/StyleCLIP/global/

# input prepare data
!python GetCode.py --dataset_name $dataset_name --code_type 'w'
!python GetCode.py --dataset_name $dataset_name --code_type 's'
!python GetCode.py --dataset_name $dataset_name --code_type 's_mean_std'

Can anyone help? Thanks.

[Feature Request] StyleCLIP GUI

Very impressive paper, amazing to see the model in action.

Would it be possible to add the GUI which was used in the YouTube video to the repository?

editing latent generated by e4e using interfacegan

hello,
Thanks for your great work.

How do I get age editing (e4e + InterFaceGAN) of the quality shown in the paper (pages 28-31)?

I tried to save the latent vector generated by the e4e encoder with the help of the code given in StyleCLIP_global.ipynb.
The code line:
dlatents_loaded=M.W2S(w_plus)

It saves the latent in z space.

I loaded this latent vector into InterFaceGAN to edit it, but the z space is not working properly. The age modification result produced by editing the z space does not match the results shown in the paper (page 29).

Is w space necessary to get a good result?

[Notebook Form] "description" should be specified as a String

Currently the form with the description input is description = 'A person with purple hair' #@param but this results in the user having to add quotes to the description themselves. Instead, it should be description = 'A person with purple hair' #@param {type:"string"} to have Google Colab apply the quotes automatically on its own.

I think latent_path = None #@param should also have {type:"string"} specified in case of spaces, and to avoid needing the user to add quotes on their own.

CUDA 11 environment installation config?

Hey there, been having some trouble installing on RTX 30 series (which require CUDA 11), and was hoping I could get some tips on how to get it working (either from the developers or other users who've encountered the same problem?)

Not sure if this is being run on a CUDA 11 environment in your group at present (if so please let me know what I've missed!), but I've not been able to run the global/ subdirectory code, beginning with GetCode.py, on either Python 3.6 or 3.7 (3.8 and above are incompatible, as they require TensorFlow 2.2).

I installed via conda as the pip installed TensorFlow was built against CUDA 10, and raised errors about missing *.so.10 libraries as a result, which disappeared when using the conda-forge package.

After getting an error that "Setting up TensorFlow plugin "fused_bias_act.cu": Failed!" I tried some advice on an NVIDIA forum post for StyleGAN2, to change line 135 of global/dnnlib/tflib/custom_ops.py to

compile_opts += f' --compiler-options \'-fPIC -D_GLIBCXX_USE_CXX11_ABI=1\''

However this had no effect: there still seems to be a failure to register the GPU.

To check whether I can use the environment I'm running

StyleCLIP/global $ python GetCode.py --code_type "w"

My environment setups (3.6 and 3.7 respectively) are as follows (after each is the error output for that environment)

Click to show setup for Python 3.6, CUDA 11.0.221, PyTorch 1.7.1, TensorFlow 1.14.0

conda create -n styleclip4
conda activate styleclip4
conda install -y "python<3.7" -c conda-forge # Python 3.6.13 (restricted by TensorFlow 1.x dependency)
conda install -y pytorch==1.7.1 torchvision "cudatoolkit<11.2" -c pytorch
# PyPi tensorflow-gpu package is built for CUDA 10, incompatible with 11, use conda-forge community package
conda install "tensorflow-gpu<2" -c conda-forge # 1.14.0
pip install git+https://github.com/openai/CLIP.git # forces pytorch 1.7.1 install
pip install pandas requests opencv-python matplotlib scikit-learn gdown
gdown https://drive.google.com/u/0/uc?id=1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT&export=download
git clone https://github.com/omertov/encoder4editing.git

Gives:

/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/louis/miniconda3/envs/styleclip4/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Setting up TensorFlow plugin "fused_bias_act.cu": Failed!
Traceback (most recent call last):
  File "GetCode.py", line 284, in <module>
    GetCode(Gs,random_state,num_img,num_once,dataset_name)
  File "GetCode.py", line 109, in GetCode
    dlatent_avg=Gs.get_var('dlatent_avg')
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 396, in get_var
    return self.find_var(var_or_local_name).eval()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 391, in find_var
    return self._get_vars()[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 297, in _get_vars
    self._vars = OrderedDict(self._get_own_vars())
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 286, in _get_own_vars
    self._init_graph()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 151, in _init_graph
    out_expr = self._build_func(*self._input_templates, **build_kwargs)
  File "<string>", line 187, in G_main
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 232, in input_shape
    return self.input_shapes[0]
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 219, in input_shapes
    self._input_shapes = [t.shape.as_list() for t in self.input_templates]
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 267, in input_templates
    self._init_graph()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 151, in _init_graph
    out_expr = self._build_func(*self._input_templates, **build_kwargs)
  File "<string>", line 491, in G_synthesis_stylegan2
  File "<string>", line 455, in layer
  File "<string>", line 99, in modulated_conv2d_layer
  File "<string>", line 68, in apply_bias_act
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 72, in fused_bias_act
    return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain, clamp=clamp)
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 132, in _fused_bias_act_cuda
    cuda_op = _get_plugin().fused_bias_act
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 18, in _get_plugin
    return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/custom_ops.py", line 139, in get_plugin
    compile_opts += f' --gpu-architecture={_get_cuda_gpu_arch_string()}'
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/custom_ops.py", line 60, in _get_cuda_gpu_arch_string
    raise RuntimeError('No GPU devices found')
RuntimeError: No GPU devices found

Click to show setup for Python 3.7, CUDA 11.0.221, PyTorch 1.7.1, TensorFlow 1.14.0

conda create -n styleclip3
conda activate styleclip3
conda install -y "python<3.8" -c conda-forge # Python 3.7.10 (restricted by TensorFlow 1.x dependency)
conda install -y pytorch==1.7.1 torchvision "cudatoolkit<11.2" -c pytorch
# PyPi tensorflow-gpu package is built for CUDA 10, incompatible with 11, use conda-forge community package
conda install "tensorflow-gpu<2" -c conda-forge # 1.14.0
pip install git+https://github.com/openai/CLIP.git # forces pytorch 1.7.1 install
pip install pandas requests opencv-python matplotlib scikit-learn gdown
gdown https://drive.google.com/u/0/uc?id=1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT&export=download
git clone https://github.com/omertov/encoder4editing.git
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/louis/miniconda3/envs/styleclip3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Setting up TensorFlow plugin "fused_bias_act.cu": Failed!
Traceback (most recent call last):
  File "GetCode.py", line 284, in <module>
    GetCode(Gs,random_state,num_img,num_once,dataset_name)
  File "GetCode.py", line 109, in GetCode
    dlatent_avg=Gs.get_var('dlatent_avg')
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 396, in get_var
    return self.find_var(var_or_local_name).eval()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 391, in find_var
    return self._get_vars()[var_or_local_name] if isinstance(var_or_local_name, str) else var_or_local_name
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 297, in _get_vars
    self._vars = OrderedDict(self._get_own_vars())
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 286, in _get_own_vars
    self._init_graph()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 151, in _init_graph
    out_expr = self._build_func(*self._input_templates, **build_kwargs)
  File "<string>", line 187, in G_main
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 232, in input_shape
    return self.input_shapes[0]
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 219, in input_shapes
    self._input_shapes = [t.shape.as_list() for t in self.input_templates]
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 267, in input_templates
    self._init_graph()
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/network.py", line 151, in _init_graph
    out_expr = self._build_func(*self._input_templates, **build_kwargs)
  File "<string>", line 491, in G_synthesis_stylegan2
  File "<string>", line 455, in layer
  File "<string>", line 99, in modulated_conv2d_layer
  File "<string>", line 68, in apply_bias_act
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 72, in fused_bias_act
    return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain, clamp=clamp)
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 132, in _fused_bias_act_cuda
    cuda_op = _get_plugin().fused_bias_act
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/ops/fused_bias_act.py", line 18, in _get_plugin
    return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/custom_ops.py", line 139, in get_plugin
    compile_opts += f' --gpu-architecture={_get_cuda_gpu_arch_string()}'
  File "/home/louis/dev/cv/StyleCLIP/global/dnnlib/tflib/custom_ops.py", line 60, in _get_cuda_gpu_arch_string
    raise RuntimeError('No GPU devices found')
RuntimeError: No GPU devices found

Requiring that packages come from the anaconda channel instead for some reason enforces TensorFlow 1.10.0, which (once you get past some interface changes) still leads to the same GPU registration problem.

Click to show setup for Python 3.6, CUDA 11.0.221, PyTorch 1.7.1, TensorFlow 1.10.0 (anaconda channel)

conda create -n styleclip6
conda activate styleclip6
conda install -y "python<3.7" -c anaconda # Python 3.6.13 (restricted by TensorFlow 1.x dependency)
conda install -y pytorch==1.7.1 torchvision "cudatoolkit>=11,<11.2" -c pytorch
# PyPi tensorflow-gpu package is built for CUDA 10, incompatible with 11, use conda-forge community package
conda install "tensorflow-gpu<2" -c anaconda # 1.10.0
pip install git+https://github.com/openai/CLIP.git # forces pytorch 1.7.1 install
pip install pandas requests opencv-python matplotlib scikit-learn gdown
gdown https://drive.google.com/u/0/uc?id=1EM87UquaoQmk17Q8d5kYIAHqu0dkYqdT&export=download
git clone https://github.com/omertov/encoder4editing.git

CUDA can also be downloaded from conda-forge but upon installing pytorch, the package is superseded by the higher-priority cudatoolkit package in that channel (making it equivalent to the attempt above which failed)

I'm out of ideas so giving up at this point, please let me know if there's a solution!

code to compute fs3

Thanks for your awesome work!
I have a new GAN model. Where is the code to compute fs3?

How to encode existing image?

The README suggests using e4e for encoding an existing face image, but that repo seems to be empty. I tried using stylegan-encoder instead, but it is based on TensorFlow and I got stuck downloading the model with a "Google Drive quota exceeded" error. Any suggestions for what to use for StyleGAN encoding?

Typos in "Optimization" notebook arguments

Great work! Thank you for sharing it.

I found the following typos in the "Optimization" notebook:

  • in the "Additional arguments" cell, the parameter size should be stylegan_size.
  • and, in the next cell, instead of from main import main it should be from optimization.run_optimization import main, I think.

Other than that, it worked perfectly :)

CUDA out of memory when using own latent

Hi, great repo, love the idea.

I have an issue with CUDA memory in the Google Colab playground notebook:
CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 14.76 GiB total capacity; 13.61 GiB already allocated; 75.75 MiB free; 13.76 GiB reserved in total by PyTorch)

It happens at the start of the optimization process.
I have used the recommended e4e to invert my own image.
If relevant: the size of the latent is torch.Size([18, 512]).

Is there any easy way to reduce the memory footprint of the notebook?

Thanks in advance.

Latent Mapper Training

Incredible work done here - congratulations on the fantastic results.

Is there a timeline for code for training the latent mapper technique by chance? Happy to pitch in on development work as well on that front if necessary.

OOM with 10GB of VRAM at GetCode.py

I'm using a RTX 3080 with 10 GB of VRAM, and when I execute
python GetCode.py --dataset_name ffhq --code_type s_mean_std
I receive a Resource Exhausted Exception, saying that I'm out of memory.

Is there anything I can do about this?

Thanks for your amazing work!

Dependencies are incompatible

I ran into issues installing all of the dependencies, so I tried to do it all at once. I'm on Windows. Here's the result; does anyone know what I'm doing wrong?

> conda create --prefix ./envs -c pytorch -c anaconda -c conda-forge pytorch=1.7.1 torchvision cudatoolkit=11 tensorflow-gpu==1.15
[...Anaconda processing conflicts...]
UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package python conflicts for:
torchvision -> python[version='>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.9,<3.10.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0']
tensorflow-gpu==1.15 -> tensorflow=1.15.0 -> python[version='3.6.*|3.7.*']
pytorch=1.7.1 -> dataclasses -> python[version='3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3.6,<3.7|>=3.7|>=3.5,<3.6.0a0|>=3.5|3.9.*']
pytorch=1.7.1 -> python[version='>=3.6,<3.7.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0|>=3.7,<3.8.0a0']
torchvision -> numpy[version='>=1.11'] -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|3.9.*|3.4.*']

Package pytorch conflicts for:
torchvision -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0|>=1.0.0|>=0.4']
pytorch=1.7.1

Package cudatoolkit conflicts for:
torchvision -> cudatoolkit[version='>=10.0,<10.1|>=10.1,<10.2|>=10.2,<10.3|>=11.1,<11.2|>=11.0,<11.1|>=9.2,<9.3|>=9.0,<9.1']
torchvision -> pytorch[version='>=1.0.0'] -> cudatoolkit[version='>=8.0,<8.1']
cudatoolkit=11

Load other images, but return ValueError

I tried it with my own pictures but they all failed, returning: ValueError: operands could not be broadcast together with shapes (1,1,0) (1,1,512) (1,1,0). Are my pictures different from the demo images? My two color pictures are a cat and a human face, both of the same size, 1024x1024.

License

Please add a license file.

[Question] How to generate images without optimization

Awesome paper! It is an excellent result.
I watched the youtube video that you uploaded yesterday.
In the demo, it generates images in real-time without optimization.
When I experimented with your colab, I had to optimize it for each text.
How do you generate images like the youtube demo?
If possible, it would be great if you could publish the code of the youtube demo.

[Feature Request] Generate Video of Rendered Frames, Included FFMPEG Command

I'm sure there's a more elegant way to implement this but my personal solution for rendering a nice video of the frames is to use ffmpeg as installed on Colab and its motion interpolation to generate a smooth video. It may be faster to just save more frames during the process, it may also end up jerkier if it goes back and forth at any point, but I haven't tested these things. I just set it and left it as is on my personal copy.

!ffmpeg -pattern_type glob -r 5 -i './results/*.png' -filter:v "minterpolate='fps=15:mb_size=8:search_param=8:vsbmc=0:scd=none:mc_mode=obmc:me_mode=bidir'" -vcodec libx264 -crf 28 -pix_fmt yuv420p out.mp4

It takes the frames as 5 fps input, interpolates them 3x to 15 fps, and saves them as a reasonable-quality MP4 with a codec and pixel format supported by most devices and websites, as opposed to the output you get without specifying vcodec and pix_fmt and only telling it to output an mp4. Doing it that way causes issues on Twitter and Tumblr, and some devices just won't play it, so I'd recommend leaving those parts in.

Perhaps the prompt could be the output name like "$description.mp4" but with some measure taken to truncate long descriptions.

Training on my own dataset?

Hello team,
Thank you for this amazing project.
I'd like to train StyleCLIP on my own dataset. I have the images already. What other data do I have to prepare? Could you please let me know the detailed steps to train it?

How to invert and edit an image

Hello,

I looked at the code but couldn't locate the part that inverts the image so it can be edited. You said you used e4e, but I couldn't find the relevant code.

Thank you.

Command errored out with exit status 128: git clone -q https://github.com/openai/CLIP.git

(base) liukai@liukaideMacBook-Pro Downloads % pip install git+https://github.com/openai/CLIP.git
Collecting git+https://github.com/openai/CLIP.git
Cloning https://github.com/openai/CLIP.git to /private/var/folders/kr/v9ktnnhn5f11s9pp5977pwqr0000gn/T/pip-req-build-jq1hz_wh
ERROR: Command errored out with exit status 128: git clone -q https://github.com/openai/CLIP.git /private/var/folders/kr/v9ktnnhn5f11s9pp5977pwqr0000gn/T/pip-req-build-jq1hz_wh Check the logs for full command output.

Can anyone help? Thanks.

Problems with GetCode's tensorflow environment

I encountered a strange problem when running the GetCode.py-related parts of the code. TensorFlow 1.x seems to be incompatible with the rest of the environment; how should I configure the environment variables here?

Custom model global directions (Manipulator, fs3)

Really impressed with the work on ffhq!

For models trained with custom data, I discovered that only global directions are supported (they require no training). But what do I need to alter in:
M=Manipulator(dataset_name='ffhq')
fs3=np.load('./npy/ffhq/fs3.npy')

Are the settings for ffhq usable for inference/manipulation of custom models?

Thanks a lot!

Colab OOM error when trying to use `latent_path` in the "Optimization" notebook

As discussed in a previous issue (#1), and suggested by @woctezuma I used this notebook to project a face image for editing.

I saved the latent with torch.save(latent, 'latent.pt') and then uploaded it to the Optmization notebook environment. It loads fine, and the shape seems correct (torch.Size([18, 512]), I added some prints in the run_optimization.py) but I keep getting the following error:

RuntimeError: CUDA out of memory. Tried to allocate 1.12 GiB (GPU 0; 15.90 GiB total capacity; 14.74 GiB already allocated; 221.75 MiB free; 14.88 GiB reserved in total by PyTorch)

which I don't understand, since it should be pretty much the same whether I use a random latent or the one I have saved, or at least that's what I understand by looking at the code. Any ideas?

I'm attaching the latent, just in case: latent.zip

Thank you :)

Google Drive - Can't Download Weights File 'e4e_ffhq_encode.pt'

When executing the 'e4e' cell in the google colab (global) I get the following error:

Permission denied: https://drive.google.com/uc?id=1O8OLrVNOItOJoNGMyQ8G8YRTeTYEfs0P
Maybe you need to change permission over 'Anyone with the link'?

I tried manually downloading the file using gdown and got the following, more descriptive error:

Too many users have viewed or downloaded this file recently. Please
try accessing the file again later. If the file you are trying to
access is particularly large or is shared with many people, it may
take up to 24 hours to be able to view or download the file. If you
still can't access a file after 24 hours, contact your domain administrator.

I can't continue afterwards, and the only solution I currently have is to download the file to my laptop and then upload it to Google Colab, which is painfully slow.
EDIT: This didn't work. Apparently the file upload got interrupted, which means I can't continue at all.

Please consider hosting the weights file on a private website or on a site that is made for file hosting and is more widely accessible.

[Errno 2] No such file or directory: 'stylegan2-ffhq-config-f.pt'

error in colab

[Errno 2] No such file or directory: 'stylegan2-ffhq-config-f.pt'

from main import main
from argparse import Namespace
result = main(Namespace(**args))


---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-4-fbc1a59be2c9> in <module>()
      1 from main import main
      2 from argparse import Namespace
----> 3 result = main(Namespace(**args))

3 frames
/usr/local/lib/python3.6/dist-packages/torch/serialization.py in __init__(self, name, mode)
    209 class _open_file(_opener):
    210     def __init__(self, name, mode):
--> 211         super(_open_file, self).__init__(open(name, mode))
    212 
    213     def __exit__(self, *args):

FileNotFoundError: [Errno 2] No such file or directory: 'stylegan2-ffhq-config-f.pt'

[Feature Request] Seeds for randomized initialization of free_generation

Simple:
Currently free_generation starts from the average face of the model and diverges from there, but I think it would be far more interesting if it could start with randomized faces when given a seed, like the default generate.py in StyleGAN2-ADA / run_generator.py in StyleGAN2.

Granular Control:
Given that random faces may be a bit too random, perhaps a control for how similar / dissimilar the initial faces should be by lerping with the dlatent_avg like truncation_psi does by default, but applied to the latent of the face itself and not passed to StyleGAN2 so that it can still modify faces with a truncation_psi of 1.0 or whatever else is desired.

Perhaps per layer or per layer range "faux truncation_psi" on the initial latents to allow for wild colors and features but a plain forward-facing head or any other combination of similar / dissimilar elements per layer. If simplified layer ranges are preferred, I'd recommend trying 0-5, 6-11, 12-17 as a starting point. These seem to be where facial shape switches to facial details switches to global coloration.
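A rough numpy sketch of the "faux truncation_psi" idea described above, assuming an 18x512 W+ latent and a known dlatent_avg; all names here are illustrative, not part of the repository:

import numpy as np

def faux_truncation(w_plus, dlatent_avg, psi_per_layer):
    # w_plus:        (18, 512) randomly sampled W+ latent
    # dlatent_avg:   (512,) average latent of the model
    # psi_per_layer: (18,) per-layer interpolation strengths toward the random latent
    return dlatent_avg + psi_per_layer[:, None] * (w_plus - dlatent_avg)

# Example: keep coarse layers (pose/shape) close to the average while letting fine layers (color) vary freely.
psi = np.concatenate([np.full(6, 0.4), np.full(6, 0.7), np.full(6, 1.0)])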
