
crossattentioncontrol's Introduction

Cross Attention Control with Stable Diffusion

Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" for Stable Diffusion. Some modifications were made to the methods described in the paper in order to make them work with Stable Diffusion.

Paper: https://arxiv.org/abs/2208.01626
Official implementation: https://github.com/google/prompt-to-prompt

What is Cross Attention Control?

Large-scale language-image models (e.g. Stable Diffusion) are usually hard to control by editing the prompt alone and can be very unpredictable and unintuitive for users. Most existing methods require the user to input a mask, which is cumbersome and might not yield good results if the mask has an inadequate shape. Cross Attention Control allows much finer control of the prompt by modifying the internal attention maps of the diffusion model during inference, without the need for the user to input a mask, and does so with minimal performance penalties (compared to CLIP guidance) and no additional training or fine-tuning of the diffusion model.

Getting started

This notebook uses the following libraries: torch, transformers, diffusers, numpy, PIL, tqdm and difflib.
The last known working version of diffusers for the notebook is diffusers==0.4.1. A different version of diffusers may cause errors, because this notebook injects code into the model and any code change in the diffusers library is likely to break compatibility. Simply install the required libraries using pip (note that PIL is provided by the Pillow package and difflib ships with the Python standard library) and run the Jupyter notebook; some examples are given inside.
A description of the parameters is given at the end of this readme.

Alternatively, there is an easy-to-follow Colab demo by Lewington-pitsos: Open In Colab

Results/Demonstrations

All images shown below are generated using the same seed. The initial and target images must be generated with the same seed for cross attention control to work.

New: Image inversion

This method takes an existing image and finds its corresponding Gaussian latent vector using a modified inverse DDIM process that stays compatible with other ODE schedulers such as K-LMS, then edits the image using prompt-to-prompt editing with cross attention control. A finite-difference gradient descent method that corrects for high CFG values is also provided. It allows inversion with higher CFG values (e.g. 3.0-5.0); without it, only CFG values below 3.0 are usable.

Middle: Original image
Top left: Reconstructed image using the prompt a photo of a woman with blonde hair
Clockwise: See InverseCrossAttention_Release.ipynb for the prompts in order.
Note that some fine-tuning of the prompts has been done to make these images consistent. For example, when changing the hair color, the person sometimes starts smiling; this can be removed by adding a smile token to the prompt and adjusting its weight downwards using cross attention control. Demo

Target replacement

Top left prompt: [a cat] sitting on a car
Clockwise: a smiling dog..., a hamster..., a tiger...
Note: different strength values for prompt_edit_spatial_start were used, clockwise: 0.7, 0.5, 1.0 Demo
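
As a reference, here is a minimal sketch of how such a target replacement could be invoked with the stablediffusion(...) function documented under Usage below; the seed and prompt_edit_spatial_start values are illustrative, not necessarily the ones used for the figures above.

img = stablediffusion(
    prompt="a cat sitting on a car",
    prompt_edit="a smiling dog sitting on a car",  # same sentence structure, only the subject is swapped
    prompt_edit_spatial_start=0.7,                 # trades off keeping the original layout vs. following the edit, see Usage
    seed=123456789,                                # initial and edited images must share the same seed
)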

Style injection

Top left prompt: a fantasy landscape with a maple forest
Clockwise: a watercolor painting of..., a van gogh painting of..., a charcoal pencil sketch of...
Demo
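
A corresponding hedged sketch, again using the stablediffusion(...) signature documented under Usage (the seed is illustrative):

img = stablediffusion(
    prompt="a fantasy landscape with a maple forest",
    prompt_edit="a watercolor painting of a fantasy landscape with a maple forest",  # style prefix added, content kept
    seed=42,
)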

Global editing

Top left prompt: a fantasy landscape with a pine forest
Clockwise: ..., autumn, ..., winter, ..., spring, green
Demo

Reducing unpredictability when modifying prompts

Left image prompt: a fantasy landscape with a pine forest
Right image prompt: a winter fantasy landscape with a pine forest
Middle image: Cross attention enabled prompt editing (left image -> right image)
Demo

Left image prompt: a fantasy landscape with a pine forest
Right image prompt: a watercolor painting of a landscape with a pine forest
Middle image: Cross attention enabled prompt editing (left image -> right image)
Demo

Left image prompt: a fantasy landscape with a pine forest
Right image prompt: a fantasy landscape with a pine forest and a river
Middle image: Cross attention enabled prompt editing (left image -> right image)
Demo

Direct token attention control

Left image prompt: a fantasy landscape with a pine forest
Towards the right: -fantasy Demo

Left image prompt: a fantasy landscape with a pine forest
Towards the right: +fantasy and +forest Demo

Left image prompt: a fantasy landscape with a pine forest
Towards the right: -fog Demo

Left image: from previous example
Towards the right: -rocks Demo
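
These token-level edits are expressed through prompt_edit_token_weights. As a sketch, the -fantasy example above can be reproduced with calls like the ones in the notebook (the token index can be confirmed with prompt_token, described under Usage):

prompt = "A fantasy landscape with a pine forest, trending on artstation"
print(prompt_token(prompt, 2))  # expected to show "fantasy"
stablediffusion(prompt, prompt_edit_token_weights=[(2, -3)], seed=2483964025, width=768)  # weaker "fantasy"
stablediffusion(prompt, prompt_edit_token_weights=[(2, -8)], seed=2483964025, width=768)  # weaker still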

Comparison to standard prompt editing

Let's compare the results above, where we removed fog and rocks from our fantasy landscape using cross attention maps, against what people usually do: editing the prompt alone.
We can first try adding "without fog and without rocks" to our prompt.

Image prompt: A fantasy landscape with a pine forest without fog and without rocks
However, we still see fog and rocks.
Demo

We can try adding words like dry, sunny and grass.
Image prompt: A fantasy landscape with a pine forest without fog and rocks, dry sunny day, grass
There are fewer rocks and less fog, but the image's composition and style are completely different from before, and we still haven't obtained our desired fog- and rock-free image...
Demo

Usage

Two functions are included: stablediffusion(...), which generates images, and prompt_token(...), which helps the user find the token index for words in the prompt, to be used for tweaking token weights in prompt_edit_token_weights.
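
For example, a hedged sketch of looking up a token index before weighting it (in this notebook the first word of the prompt appears to be index 1, since index 0 is the start-of-text token):

prompt = "a cat riding a bicycle"
print(prompt_token(prompt, 2))  # expected to show "cat"
# the tuple (2, 2.5) in prompt_edit_token_weights would then strengthen that token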

Parameters of stablediffusion(...):

Name = Default Value | Description | Example
prompt="" | the prompt, as a string | "a cat riding a bicycle"
prompt_edit=None | the second prompt, as a string; used to edit the first prompt via cross attention; set to None to disable | "a dog riding a bicycle"
prompt_edit_token_weights=[] | values that scale the importance of tokens in the cross attention layers, as a list of (token id, strength) tuples; used to increase or decrease the importance of a word in the prompt; applied to prompt_edit when possible (if prompt_edit is None, the weights are applied to prompt) | [(2, 2.5), (6, -5.0)]
prompt_edit_tokens_start=0.0 | how strict the generation is with respect to the initial prompt; increasing this lets the network be more creative with smaller details/textures; should be smaller than prompt_edit_tokens_end | 0.0
prompt_edit_tokens_end=1.0 | how strict the generation is with respect to the initial prompt; decreasing this lets the network be more creative with larger features/general scene composition; should be bigger than prompt_edit_tokens_start | 1.0
prompt_edit_spatial_start=0.0 | how strict the generation is with respect to the initial image (generated from the first prompt, not from img2img); increasing this lets the network be more creative with smaller details/textures; should be smaller than prompt_edit_spatial_end | 0.0
prompt_edit_spatial_end=1.0 | how strict the generation is with respect to the initial image (generated from the first prompt, not from img2img); decreasing this lets the network be more creative with larger features/general scene composition; should be bigger than prompt_edit_spatial_start | 1.0
guidance_scale=7.5 | standard classifier-free guidance strength for Stable Diffusion | 7.5
steps=50 | number of diffusion steps, as an integer; higher usually produces better images but is slower | 50
seed=None | random seed, as an integer; set to None to use a random seed | 126794873
width=512 | image width | 512
height=512 | image height | 512
init_image=None | init image for image-to-image generation, as a PIL image; it will be resized to width x height | PIL.Image()
init_image_strength=0.5 | strength of the noise added for image-to-image generation; higher values make the generation care less about the initial image | 0.5
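
Putting a few of these parameters together, a hedged end-to-end sketch built from the example values in the table above:

img = stablediffusion(
    prompt="a cat riding a bicycle",
    prompt_edit="a dog riding a bicycle",
    prompt_edit_token_weights=[(2, 2.5)],  # strengthens token 2 of prompt_edit ("dog" here), since weights apply to prompt_edit when it is set
    guidance_scale=7.5,
    steps=50,
    seed=126794873,
)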

Parameters of inversestablediffusion(...):

Name = Default Value | Description | Example
init_image | the image to invert | PIL.Image("portrait.png")
prompt="" | the prompt used for inversion, as a string | "portrait of a person"
guidance_scale=3.0 | standard classifier-free guidance strength for Stable Diffusion | 3.0
steps=50 | number of diffusion steps used for inversion, as an integer | 50
refine_iterations=3 | inversion refinement iterations for high CFG values; set to 0 to disable refinement when using lower CFG values, and consider increasing it for higher CFG values; higher values slow down the algorithm significantly | 3
refine_strength=0.9 | initial strength value for the refinement steps; the internal strength is adaptive | 0.9
refine_skip=0.7 | how many of the refinement diffusion steps are skipped (value between 0.0 and 1.0); earlier diffusion steps usually do not need refinement since CFG matters little at lower time steps; higher values skip even more steps | 0.7
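
A hedged sketch of an inversion call built from the example values above; how the resulting latent is fed back into prompt-to-prompt editing is shown in InverseCrossAttention_Release.ipynb, and the file name here is just the example from the table:

from PIL import Image

init_image = Image.open("portrait.png").convert("RGB")
inverted_latent = inversestablediffusion(
    init_image,
    prompt="portrait of a person",
    guidance_scale=3.0,   # CFG values much above 3.0 rely on the refinement iterations
    steps=50,
    refine_iterations=3,  # set to 0 to disable refinement at low CFG values
)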

crossattentioncontrol's People

Contributors

apolinario, bloc97, lewington-pitsos


crossattentioncontrol's Issues

An observation

Hi, thanks for the code.
I have observed that in the examples you have provided, even if I directly use the cross attention from the edited prompt by commenting out the line "attn_slice = attn_slice * (1 - self.last_attn_slice_mask) + new_attn_slice * self.last_attn_slice_mask", I get the same result in most cases. I checked the cases where words are replaced or new phrases like 'in winter' are added. So it seems like the cross attention editing is not having any effect. Please comment on this. Thanks.

AttributeError: 'dict' object has no attribute 'sample'

Dear @bloc97 ,

Thank you for your great implementation. I really like it.

When I run the code, it reports an error:

  File "/cs/labs/danix/wuzongze/diffusion_manipulation/CrossAttentionControl/test1", line 280, in <module>
    img=stablediffusion("A fantasy landscape with a pine forest, trending on artstation", prompt_edit_token_weights=[(2, -3)], seed=2483964025, width=768)

  File "/cs/labs/danix/wuzongze/anaconda3/envs/ldm/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)

  File "/cs/labs/danix/wuzongze/diffusion_manipulation/CrossAttentionControl/test1", line 213, in stablediffusion
    noise_pred_uncond = unet(latent_model_input, t, encoder_hidden_states=embedding_unconditional).sample

AttributeError: 'dict' object has no attribute 'sample'

It seems the output of unet is a dict rather than a class. Am I the only one who encounters this problem?

Best Wishes,

Zongze

Can't run the notebook in Google Colab, some issues with versions.

There are some issues with LMSDiscreteScheduler and new_attention: the notebook's new_attention now requires sequence_length and dim,
def new_attention(self, query, key, value, sequence_length, dim):
but diffusers/models/attention.py calls
hidden_states = self._attention(query, key, value)

/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py in forward(self, hidden_states, context)
    196     def forward(self, hidden_states, context=None):
    197         hidden_states = hidden_states.contiguous() if hidden_states.device.type == "mps" else hidden_states
--> 198         hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
    199         hidden_states = self.attn2(self.norm2(hidden_states), context=context) + hidden_states
    200         hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py in forward(self, hidden_states, context, mask)
    268 
    269         if self._slice_size is None or query.shape[0] // self._slice_size == 1:
--> 270             hidden_states = self._attention(query, key, value)
    271         else:
    272             hidden_states = self._sliced_attention(query, key, value, sequence_length, dim)

TypeError: new_attention() missing 2 required positional arguments: 'sequence_length' and 'dim'

Relating to the recent paper about 'Self-guidance' method

Hello @bloc97,

Your work has been instrumental in my understanding of the topic, especially since I encountered some difficulties when trying to run the official prompt to prompt code.

Recently, I've been engrossed in a paper titled "Diffusion Self-Guidance for Controllable Image Generation" (https://dave.ml/selfguidance/), where the authors introduce a novel 'Self Guidance' method. This technique edits an image by manipulating the attention maps, and I notice its resemblance to the 'Prompt to Prompt' method.

As an undergraduate student eager to delve deeper into the realm of Computer Vision, I'm interested in implementing this 'Self Guidance' method for my project. However, as of now, the authors have not released their official code. Hence I'm considering implementing that self guidance method upon the foundation of your code.

Given your expertise in this area, I was wondering if you think it's feasible to implement the 'Self Guidance' method based on your code? Any insights or suggestions you could provide would be immensely appreciated.

The differences from the official implementation?

Hi developers, thank you for completing this wonderful re-implementation.
While checking the differences between this repo and the original one, I noticed that the original repo also implements Stable Diffusion.

I am wondering if you would be willing to list the additional features exclusive to this repo on the README page. I will appreciate your clarification very much!

[question] I'd like to help contribute, but my knowledge of how diffusion models work is lacking...

Where did you learn this stuff? There are a lot of "for dummies" level explanations of how diffusion models work, and there's a fair amount of documentation out there that assumes you have an understanding of their inner workings, but there doesn't seem to be much in between. I would like to, for instance, add k-diffusion to this notebook, but I'm really confused about how to get started. Is there any kind of reading material out there that I could use to familiarize myself with this stuff a bit more?

Notebook error

Everything runs but I am getting an error when running
stablediffusion("A fantasy landscape with a pine forest, trending on artstation", seed=2483964025, width=768)

The error says

AttributeError Traceback (most recent call last)
in
1 prompt_token("A fantasy landscape with a pine forest, trending on artstation", 7)
----> 2 stablediffusion("A fantasy landscape with a pine forest, trending on artstation", seed=2483964025, width=768)
3 stablediffusion("A fantasy landscape with a pine forest, trending on artstation", prompt_edit_token_weights=[(2, -3)], seed=2483964025, width=768)
4 stablediffusion("A fantasy landscape with a pine forest, trending on artstation", prompt_edit_token_weights=[(2, -8)], seed=2483964025, width=768)
5 stablediffusion("A fantasy landscape with a pine forest, trending on artstation", "A fantasy landscape with a pine forest, trending on artstation", prompt_edit_token_weights=[(2, 2), (7, 5)], seed=2483964025, width=768)

2 frames
/usr/local/lib/python3.7/dist-packages/diffusers/schedulers/scheduling_lms_discrete.py in add_noise(self, original_samples, noise, timesteps)
260 sigmas = self.sigmas.to(original_samples.device)
261 schedule_timesteps = self.timesteps.to(original_samples.device)
--> 262 timesteps = timesteps.to(original_samples.device)
263 if isinstance(timesteps, torch.IntTensor) or isinstance(timesteps, torch.LongTensor):
264 deprecate(

AttributeError: 'int' object has no attribute 'to'

Question about original google implementation with stable diffusion

Hi bloc, firstly thank you for your great work!
I've been spending a lot of time trying to implement Google's original release into a custom pipeline with diffusers. I figured it wouldn't be too difficult, since they have an example there running with SD that looks pretty good, but I'm getting very strange results even though everything seems to be in working order. I was considering that it may be because I had been using SD 1.5 whereas they had been using 1.4, but I don't think there were any architecture changes that would cause that?

Could you elaborate a bit more on the changes you made to get it to work with stable?

Can't install dependencies

I'm in an Anaconda prompt (on Windows). Installing required packages doesn't work:

(base) C:\Users\andre>pip install torch transformers diffusers numpy PIL tqdm difflib
Collecting torch
  Using cached torch-1.12.1-cp39-cp39-win_amd64.whl (161.8 MB)
Collecting transformers
  Using cached transformers-4.21.3-py3-none-any.whl (4.7 MB)
Collecting diffusers
  Using cached diffusers-0.3.0-py3-none-any.whl (153 kB)
Collecting numpy
  Downloading numpy-1.23.3-cp39-cp39-win_amd64.whl (14.7 MB)
     |████████████████████████████████| 14.7 MB 6.4 MB/s
ERROR: Could not find a version that satisfies the requirement PIL (from versions: none)
ERROR: No matching distribution found for PIL

How to make image inversion more precise?

Fantastic work on this project @bloc97!

I'm able to get super impressive results with prompt editing. However, when doing img2img I find that the results degrade greatly. For example, here I'm editing the prompt to change to a charcoal drawing, which works well. However, if I pass in the initial image generated from the original prompt, there are no parameter values I can find that get anywhere close to the quality of the prompt edit without an initial image. I'm observing issues similar to stock SD, where either the macro structure of the initial image is lost or the prompt edit has little to no effect.

The reason I want this is to edit real images and to build edits on top of each other. I realize this may be unsolved and may depend on how well the network understands the scene content, but I'm very interested in your thoughts and suggestions here, as I think this would be incredibly powerful.

img_original = stablediffusion(
    prompt="a fantasy landscape with a maple forest",
    steps=50,
    seed=42,
)

img_prompt_edit = stablediffusion(
    prompt="a fantasy landscape with a maple forest",
    prompt_edit="a charcoal sketch of a fantasy landscape with a maple forest",
    steps=50,
    seed=42,
)

img_init_image = stablediffusion(
    prompt="a fantasy landscape with a maple forest",
    prompt_edit="a charcoal sketch of a fantasy landscape with a maple forest",
    steps=50,
    seed=42,
    init_image=img_original,
    init_image_strength=0.6,
)

(attached: the three resulting images from the calls above)

Add direct target editing to the notebook

It would be great to have a direct target editing example in the notebook. Something like the attached example image.

Code for this:

#https://lexica.art/prompt/2127efd3-e23b-44dc-baac-494993bc9688
image = stablediffusion("A photo of a Corgi dog riding a bike in Times Square wearing sunglasses and beach hat, cinestill, 800t, 35mm, full-HD",
                        seed=2401809524,
                        guidance_scale=7,
                        steps = 150)
image
prompt = "A photo of a teddy bear riding a bike in Times Square wearing sunglasses and beach hat, cinestill, 800t, 35mm, full-HD"

print(prompt_token(prompt, 5), prompt_token(prompt, 6))

stablediffusion(prompt,
                        seed=2401809524,
                        guidance_scale=7,
                        prompt_edit_token_weights = [(5, 5), (6, 5)],
                        init_image=image,
                        steps=150)

Implement with Stable Diffusion repositories

Hello, thanks for the implementation! It works very well.

As a suggestion, it may be helpful to provide code that works with the broader GitHub community. While the Diffusers library does allow for better ease of use and a more streamlined experience, it can possibly hinder the freedom to use the method across similar implementations due to how their library works.

Negative weighting

Does negative weighting work properly? Most repos that have weighting don't have the option for negative weights; does it work in this one?
Thanks

About the finite difference gradient descent method

Hi @bloc97 ,

Thanks for your great work.

Do you know any other papers/implementations using the finite difference gradient descent to do inversion?
I want more references for this solution.

Also, could you please give more hints about the magic number tless?

About terms["nll"]

Thanks for your great work. In line 633 of gaussian_diffusion.py, terms["nll"] is calculated but not used. Is it a mistake, or does it simply not work?
terms["nll"] = self._token_discrete_loss(model_out_x_start, get_logits, input_ids_x, mask=input_ids_mask, truncate=True, t=t)
terms["loss"] = terms["mse"] + decoder_nll + tT_loss

Better support for prompt_edit_token_weights parsing

Instead of counting indices for tokens to pass into prompt_edit_token_weights, it would be easier to reference them by word.
parse_edit_weights converts weights specified with words or word lists (in addition to int indices) into weights with int indices:

prompt = 'the quick brown fox jumps over the lazy dog'
parse_edit_weights(prompt, None, [('brown', -1), (2, 0.5), (['lazy', 'dog'], -1.5)])

The returned result is [(3, -1), (2, 0.5), (8, -1.5), (9, -1.5)].

Here's the code:

def sep_token(prompt):
    tokens = clip_tokenizer(prompt, padding="max_length", max_length=clip_tokenizer.model_max_length, truncation=True, return_tensors="pt", return_overflowing_tokens=True).input_ids[0]
    words = []
    index = 1
    while True:
        word = clip_tokenizer.decode(tokens[index:index+1])
        if not word: break
        if word == '<|endoftext|>': break
        words.append(word)
        index += 1
        if index > 500: break
    return words

def parse_edit_weights(prompt, prompt_edit, edit_weights):
    if prompt_edit:
        tokens = sep_token(prompt_edit)
    else:
        tokens = sep_token(prompt)
    
    prompt_edit_token_weights=[]
    for tl, w in edit_weights:
        if isinstance(tl, list) or isinstance(tl, tuple):
            pass
        else:
            tl = [tl]
        for t in tl:
            try:
                if isinstance(t, str):
                    idx = tokens.index(t) + 1
                elif isinstance(t, int):
                    idx = t
                prompt_edit_token_weights.append((idx, w))
            except ValueError as e:
                print(f'error {e}')
            
    return prompt_edit_token_weights

Question about the code in CrossAttention_Release.ipynb

Hello,
Thank you for sharing your awesome code! :)

I have a question about this line:
latent_model_input = (latent_model_input / ((sigma**2 + 1) ** 0.5)).to(unet.dtype)

Could you give me some explanation of why "(sigma**2 + 1) ** 0.5" is needed?

Thanks.

[feature request] Automatically print token being modified (and by how much) when generating an image

I was getting some nonsensical results for a little while until it occurred to me that some words are multiple tokens (and punctuation, etc, are tokens as well).

Here's some code I'm using to do it:

        #Process prompt editing
        if prompt_edit is not None:
            tokens_conditional_edit = clip_tokenizer(prompt_edit, padding="max_length", max_length=clip_tokenizer.model_max_length, truncation=True, return_tensors="pt", return_overflowing_tokens=True)
            embedding_conditional_edit = clip(tokens_conditional_edit.input_ids.to(device)).last_hidden_state
            
            init_attention_edit(tokens_conditional, tokens_conditional_edit)
            
            #My code starts here
            for t in prompt_edit_token_weights:
                token_word = prompt_token(prompt_edit, t[0])
                print(f"{token_word}: {t[1]}")
        else:
            for t in prompt_edit_token_weights:
                token_word = prompt_token(prompt, t[0])
                print(f"{token_word}: {t[1]}")

Probably ought to check and make sure I'm not misunderstanding anything, but it appears to work.

Did you get the same result?

Hello.

I have another question.

From your code, I tried to reproduce the results in the prompt-to-prompt paper.

However, I got the result shown in the attached image.

Did you get an identical result?

How do I set the parameters to get the results in the prompt-to-prompt paper?

And,

Please let me know why you divided the latent by 0.1825 before inserting it into the VAE.

Thanks :) !!

Edit to README.md

Image prompt: A fantasy landscape with a pine forest without fog and without rocks
However, we still see fog and rocks.

That's not how SD works.
You can't type a prompt like "not a woman without a crown not wearing a red dress" and expect SD to follow it.
You will 100% get "a woman, a crown, wearing a red dress".

If you want a forest without fog and rocks, just don't make these words part of the prompt, or specify alternative words instead,
like "A fantasy landscape with a pine forest with a shiny morning sun and with an asphalt road".

Add some notes on running on Windows to readme

Got this running on Windows. I had to do the following after setting up the Python environment:

For the record, I did this with my existing environment from hlky's Stable Diffusion WebUI, which can be found here: https://github.com/sd-webui/stable-diffusion-webui, so I didn't need to install the other packages because I already had them.

This isn't quite good enough to go into the readme yet because I didn't install from a blank environment, but maybe other Windows users can use this info and some instructions can be assembled.

Implementing Dreambooth weights

Is it possible to use a trained Dreambooth model with cross attention control?
I trained a model in Dreambooth-Stable-Diffusion on a new car, and I have an image where I want to change the car to the one I trained in Dreambooth.
Changing 'model_path_diffusion' to the downloaded Dreambooth weights does not seem to work; it does not generate the new car but something totally different.
