deepfakes / faceswap
Deepfakes Software For All
Home Page: https://www.faceswap.dev
License: GNU General Public License v3.0
I think it would be nice to have a web UI for this project. It could be very helpful when running the script in the cloud, and also convenient for local use. Users could set everything up by executing a single docker run (or nvidia-docker run) command that starts the web UI server.
I made a few sketches to better understand the idea: https://www.figma.com/file/LCHSW0lMj8OAUo8dLOljPc9b/Deepfakes
What do you think about it?
That script is pure gold, and it would be great if we could create a one-click program to achieve this noble goal.
The link is here: https://gist.github.com/anonymous/8f468972d6286f403318c94bc6dbe382
I have had some success hacking together a pre-processing script to run over my training images. It uses dlib.chinese_whispers_clustering to group the found faces in the training data based on likeness. I think one of the keys to good results is good training sets, and this helps to prevent polluting the training data with other people's faces, as tends to happen with Google image search sets or images with multiple faces.
There are a couple of ways I think this could be integrated into the project:
Here's the script; sorry it's a bit hacky. I just wanted something that worked and haven't cleaned it up. I'm not sure where I would begin to integrate it into the project, perhaps as an alternative plugin?
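The clustering idea above can be sketched roughly like this, assuming a recent dlib build with the two model files (the 68-point shape predictor and the face recognition ResNet) downloaded locally; the function and folder names here are illustrative, not the poster's actual script:

```python
import os
import shutil
from collections import defaultdict


def group_by_label(paths, labels):
    """Pure helper: map cluster label -> list of image paths."""
    groups = defaultdict(list)
    for path, label in zip(paths, labels):
        groups[label].append(path)
    return dict(groups)


def cluster_faces(image_dir, out_dir, threshold=0.5):
    """Group training images into per-identity folders with dlib's
    chinese_whispers_clustering. dlib is imported lazily so the pure
    helper above stays usable without it."""
    import dlib
    detector = dlib.get_frontal_face_detector()
    shaper = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    encoder = dlib.face_recognition_model_v1(
        "dlib_face_recognition_resnet_model_v1.dat")
    descriptors, paths = [], []
    for name in os.listdir(image_dir):
        path = os.path.join(image_dir, name)
        img = dlib.load_rgb_image(path)
        for rect in detector(img, 1):
            shape = shaper(img, rect)
            descriptors.append(encoder.compute_face_descriptor(img, shape))
            paths.append(path)
    labels = dlib.chinese_whispers_clustering(descriptors, threshold)
    for label, members in group_by_label(paths, labels).items():
        target = os.path.join(out_dir, "person_{}".format(label))
        os.makedirs(target, exist_ok=True)
        for src in members:
            shutil.copy(src, target)
```

Copying rather than moving keeps the original set intact while you inspect which cluster folder actually holds the target face.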
It fails with the error below and produces no picture output!
contrib/shape_predictor_68_face_landmarks.dat file not found.
Landmark file can be found in http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
Unzip it in the contrib/ folder.
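The download-and-unzip step above can be automated; here is a stdlib-only sketch (the fetch_landmarks name and contrib/ default are just mirrors of the instructions, not project code):

```python
import bz2
import os
import urllib.request

LANDMARKS_URL = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"


def decompress_bz2(src_path, dest_path):
    """Write the decompressed contents of a .bz2 archive to dest_path."""
    with bz2.open(src_path, "rb") as fin, open(dest_path, "wb") as fout:
        fout.write(fin.read())


def fetch_landmarks(dest="contrib/shape_predictor_68_face_landmarks.dat"):
    """Download dlib's landmarks file into contrib/ if it is missing."""
    if os.path.dirname(dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
    archive = dest + ".bz2"
    if not os.path.exists(dest):
        urllib.request.urlretrieve(LANDMARKS_URL, archive)
        decompress_bz2(archive, dest)
        os.remove(archive)  # keep only the decompressed .dat
    return dest
```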
OpenCV Error: Assertion failed (ssize.width > 0 && ssize.height > 0) in resize, file /io/opencv/modules/imgproc/src/resize.cpp, line 4044
Failed to extract from image:~path~/~photoname~.jpg. Reason: /io/opencv/modules/imgproc/src/resize.cpp:4044: error: (-215) ssize.width > 0 && ssize.height > 0 in function resize
I don't know why; it shows the same error on another computer running Linux.
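That (-215) assertion fires when cv2.resize receives an empty image: cv2.imread returns None for unreadable files, and a face crop can end up with zero width or height. A small guard before resizing (safe_resize is an illustrative name, not a repo function) avoids the crash:

```python
def has_pixels(image):
    """True if image is a non-empty ndarray-like object.
    cv2.imread returns None for unreadable files, and a zero-sized
    crop is exactly what trips the (-215) resize assertion."""
    return image is not None and getattr(image, "size", 0) > 0


def safe_resize(image, size):
    """Resize only after validating the input; raise a clear error
    the caller can log (like the 'Failed to extract' message) instead
    of an OpenCV assertion."""
    import cv2  # imported lazily; only needed when actually resizing
    if not has_pixels(image):
        raise ValueError("empty or unreadable image; skipping resize")
    return cv2.resize(image, size)
```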
The Dockerfile contains the basic dependencies to run the program.
It hits bugs with os.scandir and the mkdir parameter exist_ok.
Please feel free to improve it and provide better support for the program.
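Both of those are Python 3-only features (os.scandir since 3.5, makedirs exist_ok since 3.2), which is likely why they break inside a Python 2 image. A compatibility sketch (helper names are my own):

```python
import errno
import os


def ensure_dir(path):
    """Portable equivalent of os.makedirs(path, exist_ok=True);
    the exist_ok parameter does not exist on Python 2."""
    try:
        os.makedirs(path)
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise


def iter_entries(path):
    """os.scandir (3.5+) with an os.listdir fallback for older
    interpreters; both return the entry names in the directory."""
    if hasattr(os, "scandir"):
        return sorted(entry.name for entry in os.scandir(path))
    return sorted(os.listdir(path))
```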
Please take care of CPU-only users if you update TensorFlow, and provide a Dockerfile.gpu for optimized GPU usage.
Please post here a link to the folder of photographs you want to share with others.
I have already installed pathlib in Python 3.6: "Requirement already satisfied: pathlib in /usr/local/lib/python3.6/dist-packages"
Command executed: python3 faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
Traceback (most recent call last):
  File "faceswap.py", line 3, in <module>
    from scripts.extract import ExtractTrainingData
  File "/home/ubuntu/data/faceswap/scripts/extract.py", line 2, in <module>
    from lib.cli import DirectoryProcessor
  File "/home/ubuntu/data/faceswap/lib/cli.py", line 6, in <module>
    from lib.utils import get_image_paths, get_folder, load_images, stack_images
  File "/home/ubuntu/data/faceswap/lib/utils.py", line 4, in <module>
    from pathlib import Path
ImportError: No module named pathlib
Can anyone help me out with this issue?
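pathlib has been in the standard library since Python 3.4, and the unquoted "No module named pathlib" wording is the Python 2 form of the error, so the script was almost certainly launched with a Python 2 interpreter despite the pip install into 3.6. A fail-fast guard at the top of the entry script (names are my own sketch, not repo code) would surface this immediately:

```python
import sys


def check_python_version(version_info=None):
    """Exit with a clear message when run under a too-old interpreter.
    pathlib is stdlib from Python 3.4 onward, so requiring 3.4+ removes
    the confusing ImportError entirely."""
    info = version_info or sys.version_info
    if info < (3, 4):
        raise SystemExit(
            "faceswap requires Python 3.4+; you are running "
            "{}.{}".format(info[0], info[1]))


check_python_version()
```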
If anyone here is interested I've created a simple desktop app w/ GUI to distribute the deepfakes toolkit without the need to install python or other dependencies. Here is a screenshot of it working. The download and more info are in this thread.
Currently this app runs on the original scripts because I wasn't aware of this repo, but I think I'm eventually going to migrate it to the improved scripts here.
The current iteration of the tool will simply error out or quit if the model and output directories do not exist. To improve this behavior for new users, we can prompt to create these directories automatically.
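A minimal sketch of that behavior; the name get_folder echoes the helper imported from lib/utils in the traceback above, but the prompt-and-create logic here is my own assumption:

```python
import os


def get_folder(path, prompt=False):
    """Return path, creating the directory if it is missing.
    With prompt=True, ask the user first instead of erroring out."""
    if not os.path.isdir(path):
        if prompt:
            answer = input("{} does not exist. Create it? [y/N] ".format(path))
            if answer.strip().lower() != "y":
                raise SystemExit("Aborted: missing directory {}".format(path))
        os.makedirs(path)
    return path
```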
I'll try to tackle that one with the simple cv2 support
Hello everyone. For more than a week I have been trying to run this project, but I am having trouble. I even deleted my Python and PyCharm installs to reinstall them, but again no success. Right now I can't install the tensorflow-gpu package; it says "binascii.Error: Incorrect padding". Before that the trouble was with dlib, but reinstalling Python and PyCharm helped with that.
I am praying for someone to help with an installation manual, step by step... thank you a lot! I will not give up, this project is awesome!
When I train using the above command, it doesn't start training; it just prints the positional and optional arguments message, then stops with no error.
do you know why? @joshua-wu @Clorr
Ensure that all dependencies are met prior to execution and provide helpful messages to the user in case they are missing.
The Dockerfile is now cleaner, but still relying on Python2 dependencies.
It would be great to move everything to Python3 so we get rid of legacy, and can take advantage of Python3 features.
I'm not a regular Python user, so I'm not sure what to do. I can try to tackle that at some point, but for now I'd prefer to focus on new features and algorithm improvements...
I have been training the model for the past 36 hours on macOS High Sierra.
The issue I am facing is that when I press "q" to stop the training process, nothing happens; it continues to train the model.
Please help, as I don't want to terminate the program with Ctrl+C: the trained model would be lost and all my time/electricity/etc. would be wasted!
Your latest commit brought me some new errors; please help me play around and fix them.
from lib.cli import TrainingProcessor
If not, where should I put the unzipped data directory? Sorry for asking newbie questions.
I am using PyCharm and Docker. Thanks.
I have a question about GPU usage. I have a Ryzen 1600X and an Nvidia 1060. When I run training, the CPU runs at about 30%, the GPU core at 10 to 20%, and VRAM is full at 5 GB. Is this normal? Training has run for a few hours at the same percentages. Thanks for the answer, and sorry for my English.
Hi all,
This is not a bug; I am just having problems with my install. I've installed everything from this GitHub repo except dlib and face_recognition. I keep getting this when I try pip install dlib:
-- *****************************************************************************************************
CMake Error at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:149 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (include)
-- Configuring incomplete, errors occurred!
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeOutput.log".
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeError.log".
error: cmake configuration failed!
Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... error
Complete output from command "c:\users\michael nguyen\appdata\local\programs\python\python35\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\MICHAE~1\\AppData\\Local\\Temp\\pip-build-bkwfh9da\\dlib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\MICHAE~1\AppData\Local\Temp\pip-p539ppox-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
Detected Python architecture: 64bit
Detected platform: win32
Removing build directory C:\Users\MICHAE~1\AppData\Local\Temp\pip-build-bkwfh9da\dlib./tools/python/build
Configuring cmake ...
-- Building for: Visual Studio 14 2015
-- Selecting Windows SDK version to target Windows 10.0.16299.
-- The C compiler identification is MSVC 19.0.24210.0
-- The CXX compiler identification is MSVC 19.0.24210.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py34; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:66 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python-py35; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:68 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.10/Modules/FindBoost.cmake:1610 (message):
No header defined for python3; skipping header check
Call Stack (most recent call first):
C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:71 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
-- Could NOT find Boost
-- Found PythonLibs: C:/Users/Michael Nguyen/AppData/Local/Programs/Python/Python35/libs/python35.lib (found suitable version "3.5.4", minimum required is "3.4")
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of void*
-- Check size of void* - done
-- Enabling SSE4 instructions
-- Searching for BLAS and LAPACK
-- Searching for BLAS and LAPACK
-- Looking for pthread.h
-- Looking for pthread.h - not found
-- Found Threads: TRUE
-- A library with BLAS API not found. Please specify library location.
-- LAPACK requires BLAS
-- Found CUDA: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0 (found suitable version "8.0", minimum required is "7.5")
CMake Warning at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/CMakeLists.txt:535 (message):
You have CUDA installed, but we can't use it unless you put visual studio
in 64bit mode.
-- Disabling CUDA support for dlib. DLIB WILL NOT USE CUDA
-- C++11 activated.
-- *****************************************************************************************************
-- We couldn't find the right version of boost python. If you installed boost and you are still getting this error then you might have installed a version of boost that was compiled with a different version of visual studio than the one you are using. So you have to make sure that the version of visual studio is the same version that was used to compile the copy of boost you are using.
--
-- You will likely need to compile boost yourself rather than using one of the precompiled
-- windows binaries. Do this by going to the folder tools\build\ within boost and running
-- bootstrap.bat. Then run the command:
-- b2 install
-- And then add the output bin folder to your PATH. Usually this is the C:\boost-build-engine\bin
-- folder. Finally, go to the boost root and run a command like this:
-- b2 -a --with-python address-model=64 toolset=msvc runtime-link=static
-- Note that you will need to set the address-model based on if you want a 32 or 64bit python library.
-- When it completes, set the BOOST_LIBRARYDIR environment variable equal to wherever b2 put the
-- compiled libraries. You will also need to set BOOST_ROOT to the root folder of the boost install.
-- E.g. Something like this:
-- set BOOST_ROOT=C:\local\boost_1_57_0
-- set BOOST_LIBRARYDIR=C:\local\boost_1_57_0\stage\lib
--
-- Next, if you aren't using python setup.py then you will be invoking cmake to compile dlib.
-- In this case you may have to use cmake's -G option to set the 64 vs. 32bit mode of visual studio.
-- Also, if you want a Python3 library you will need to add -DPYTHON3=1. You do this with a statement like:
-- cmake -G "Visual Studio 14 2015 Win64" -DPYTHON3=1 ....\tools\python
-- Rather than:
-- cmake ....\tools\python
-- Which will build a 32bit Python2 module by default on most systems.
--
-- *****************************************************************************************************
CMake Error at C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/dlib/cmake_utils/add_python_module:149 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (include)
-- Configuring incomplete, errors occurred!
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeOutput.log".
See also "C:/Users/Michael Nguyen/AppData/Local/Temp/pip-build-bkwfh9da/dlib/tools/python/build/CMakeFiles/CMakeError.log".
error: cmake configuration failed!
----------------------------------------
Command ""c:\users\michael nguyen\appdata\local\programs\python\python35\python.exe" -u -c "import setuptools, tokenize;__file__='C:\\Users\\MICHAE~1\\AppData\\Local\\Temp\\pip-build-bkwfh9da\\dlib\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\MICHAE~1\AppData\Local\Temp\pip-p539ppox-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\MICHAE~1\AppData\Local\Temp\pip-build-bkwfh9da\dlib\
C:\Users\Michael Nguyen>
Anyone use windows 10 and know how to tackle this?
Improve documentation covering the following topics:
Use the convert command to convert a directory. Currently, convert_one_image loads the model every time it is called; it should load the model only once.
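One way to sketch that fix is to cache the expensive model load so repeated conversions reuse it; the names below are illustrative, not the repo's actual API:

```python
import functools

LOAD_COUNT = [0]  # instrumentation so the caching is observable


@functools.lru_cache(maxsize=1)
def load_model(model_dir):
    """Placeholder for the real (expensive) weight loading; lru_cache
    means only the first call with a given model_dir pays the cost."""
    LOAD_COUNT[0] += 1
    return {"dir": model_dir}


def convert_one_image(image_path, model_dir):
    model = load_model(model_dir)  # cached after the first call
    # ... run the actual face conversion with `model` here ...
    return (image_path, model["dir"])
```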
The rectangle gives the generated image an artificial look, so it would be a nice feature to use a softer mask shape.
This page shows a couple of interesting tricks, such as landmark detection, hull detection, and seamless cloning...
(Note: If needed, landmarks are already handled in the aligner class)
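Combining those tricks, the soft-mask paste could look roughly like this: build a convex hull from the landmarks, fill it as a mask, and blend with cv2.seamlessClone. This is a sketch under the assumption that landmarks arrive as (x, y) pairs from the aligner class; it is not code from the repo:

```python
def rect_center(x, y, w, h):
    """Center point of a bounding rectangle (pure helper)."""
    return (x + w // 2, y + h // 2)


def seamless_swap(src_face, dst_frame, landmarks):
    """Blend src_face into dst_frame inside the convex hull of the
    landmark points, using OpenCV's seamless cloning instead of a
    hard rectangle. cv2/numpy imported lazily to keep the helper pure."""
    import cv2
    import numpy as np
    hull = cv2.convexHull(np.array(landmarks, dtype=np.int32))
    mask = np.zeros(dst_frame.shape[:2], dtype=np.uint8)
    cv2.fillConvexPoly(mask, hull, 255)
    x, y, w, h = cv2.boundingRect(hull)
    return cv2.seamlessClone(src_face, dst_frame, mask,
                             rect_center(x, y, w, h), cv2.NORMAL_CLONE)
```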
Can we include a 128x128 input/output size, to get better face resolution?
I'm not a programmer, but I found that the latest dfaker and faceswap-GAN have included it.
Please see the links below:
https://github.com/dfaker/df
https://github.com/shaoanlu/faceswap-GAN/blob/master/FaceSwap_GAN_v2_sz128_train.ipynb
I hope the new GAN version will be released soon. Good luck, and thanks to the team.
Please, which paper is this project based on? I want to see the results.
PyInstaller or some other means to create pre-built packages for the most common OSes. Alternatively, look for other similar tools that would allow us to manage most of the dependencies. Perhaps Conda or others.
Target OSes:
For Linux users the manual setup with virtualenv or Dockerfile will most likely suffice.
A guy posted here a better train.py with thread support. If there is interest, I can add it to the main repo.
Source: https://www.reddit.com/r/deepfakes/comments/7nlql5/optimized_trainpy/
At this time, all the scripts print the help text after exiting in an unexpected fashion. For example, when cancelling a command, it still prints the help text before quitting.
Output:
[zoulock@zoulock-desktop faceswap]$ python3 faceswap.py train -A faceA -B faceB -m models -p
/usr/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
WARNING:tensorflow:From /usr/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1264: calling reduce_prod (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:1349: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
Model A Directory: /home/zoulock/faceswap/faceA
Model B Directory: /home/zoulock/faceswap/faceB
Training data directory: /home/zoulock/faceswap/models
Not loading existing training data.
Unable to open file (unable to open file: name = '/home/zoulock/faceswap/models/encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
Starting, this may take a while...
usage: faceswap.py [-h] {extract,train,convert} ...
positional arguments:
{extract,train,convert}
extract Extract the faces from a pictures.
train This command trains the model for the two faces A and
B.
convert Convert a source image to a new one with the face
swapped.
optional arguments:
-h, --help show this help message and exit
I request step-by-step instructions for working with this project.
I am a Java and Node.js guy, but I'm interested in AI/ML and new to Python.
I am sure there are many people like me around the globe, so please provide more details; my environment is set up correctly, but the project files throw errors when run.
Once I am comfortable with how it works, I will assist with this project.
Thanks in advance.
As deepfakes suggested on Reddit, face alignment scripts based on 1adrianb/face-alignment work better.
It would be great if this were added to the project and pipeline. It could be automatic, but as deepfakes points out, there may be badly aligned images that need to be deleted, so maybe the README can be updated on how to use the script manually.
https://www.reddit.com/r/deepfakes/comments/7lae4c/face_alignment_scripts_based_on/
https://github.com/1adrianb/face-alignment
Add filtering to keep only one recognized face.
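One way to sketch that filter: keep only faces whose descriptor is close to a reference descriptor of the target person. dlib's documentation suggests 0.6 as the distance below which two descriptors usually belong to the same person; the function names here are my own:

```python
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def keep_matching_faces(candidates, reference, threshold=0.6):
    """candidates: list of (face_image, descriptor) pairs.
    Keep only faces within `threshold` of the reference descriptor;
    0.6 is the same-person cutoff dlib's docs suggest."""
    return [face for face, desc in candidates
            if euclidean(desc, reference) < threshold]
```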
Hi,
can we add a plugin to merge frames into a video with audio?
I'm trying to train the network with the example provided, and it crashes with this error:
"Resource exhausted: OOM when allocating tensor with shape"
Python reached 3.7 GB, and my computer has 8 GB; there was a peak of 9.8 GB used.
I'm using TensorFlow on CPU. How much RAM is needed?
Tried installing the requirements-gpu.txt and get this error:
Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
Cache entry deserialization failed, entry ignored
Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )
No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
I went here to troubleshoot the issue: tensorflow/tensorflow#8251
Installed Python 64bit. Opened new command prompt window and typed in: pip3 install --upgrade tensorflow-gpu
Successfully uninstalled setuptools-28.8.0
Successfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0
Went back to my faceswap env to enter the requirements-gpu.txt and still get the same error:
(faceswap) C:\faceswap>pip install -r requirements-gpu.txt
Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )
No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))
Hi guys,
The more I think about it, the more I think complex arg parsing will be a problem.
Also if we want to move on to a GUI, it would be better to have a config file to set parameters durably.
We still can have an override for params through command line, so we can customize easily just the things we want. Something like ConfigArgParse
does this for example.
What do you think about it?
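ConfigArgParse packages exactly this pattern; a stdlib-only sketch of the same idea (durable defaults in a config file, overridable from the command line, with illustrative option names) could look like:

```python
import argparse
import configparser


def build_parser(config_path=None):
    """Read durable defaults from an INI file, then let command-line
    flags override them, mimicking what ConfigArgParse automates."""
    defaults = {"batch_size": "64", "model_dir": "models"}
    if config_path:
        cfg = configparser.ConfigParser()
        cfg.read(config_path)
        if cfg.has_section("faceswap"):
            defaults.update(cfg["faceswap"])  # config file overrides built-ins
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch-size", type=int,
                        default=int(defaults["batch_size"]))
    parser.add_argument("--model-dir", default=defaults["model_dir"])
    return parser  # CLI flags override the config file in turn
```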
Please, can anyone determine why this issue may be occurring? When trained to a very low loss (<0.02), the preview window looks fine, but at the convert/merge stage it turns into a mess.
Please forgive me if this is an intrusion, as strictly speaking this concerns reddit.com/r/FakeApp, but I noticed user Clorr1's offer of helping out and pointing to this repo, which FakeApp presumably implements. I have seen several people with this issue on r/fakeapp and r/deepfakes.
[Links removed]
It's really frustrating because the training seems fine in the earlier stages (judging from early merge test runs), but once things start getting accurate in the preview window, merge (now convert) turns the whole thing into zombie mode. So far I can't see anything in common between users like myself who have this issue. I have tried a lot of different troubleshooting steps, but they are random, to be honest:
Downgrading graphics driver to one shipped with CUDA 8.0
switching A and B to match file format
running app in admin mode
uninstalling coincidentally installed python
I'm really sure about the quality of the celeb images; there is nothing to indicate they are problematic. After all, why does it start fine and end up a mess while the preview window shows good results?
Thanks for any help
Nvidia 1070
Win 10
python faceswap.py extract
only extracts .jpg files, not .JPG
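The fix is to compare extensions case-insensitively when collecting input images; a sketch (the name get_image_paths matches the helper imported from lib/utils in the traceback earlier, but this body is my own):

```python
import os

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png"}


def get_image_paths(directory):
    """List image files in a directory, matching extensions
    case-insensitively so .JPG (as many cameras produce) is
    picked up alongside .jpg."""
    return sorted(
        os.path.join(directory, name)
        for name in os.listdir(directory)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS
    )
```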
It would be great to facilitate use for non-technical people, e.g. by making this an .exe with PyInstaller, or something similar. Any idea is welcome...
I added a first draft for this. It sometimes gives better results:
Note: in order to use it, you will have to download the landmarks file from dlib:
http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
It'd be nice if the internal image size was easily configurable through the various steps to something other than 64x64. With upcoming improved plugins, or just someone willing to put in a lot more training time, going higher resolution seems inevitable.
I've tried hacking around a bit but I'm new to python and deep learning. I'm sure I'll get it eventually but this is probably a trivial change for someone familiar with these libraries.
Doesn't need to be piped all the way to the command line, but if it could be pulled up to changing a define it would be as easy a change for people as changing ENCODER_DIM is.
I tried to load a trained model from FakeApp with "faceswap.py train ...", but I got this message:
You are trying to load a weight file containing 7 layers into a model with 6 layers.
How can I fix this? I use this train setting in FakeApp
Model:
Processor: GPU
Layers: 4
Node: 512
Mem Ratio: default
GPU Growth: false
Thanks.
Adding command-line argument parsing with help output would be great!
Preferably with argparse.
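A minimal argparse sketch with the three subcommands the tool eventually exposes (extract, train, convert); the flag names mirror the commands quoted elsewhere in this thread, but the exact set is an assumption:

```python
import argparse


def build_cli():
    """Subcommand-style CLI: faceswap.py {extract,train,convert} ..."""
    parser = argparse.ArgumentParser(prog="faceswap.py")
    sub = parser.add_subparsers(dest="command")

    extract = sub.add_parser("extract", help="Extract faces from pictures")
    extract.add_argument("-i", "--input-dir", required=True)
    extract.add_argument("-o", "--output-dir", required=True)

    train = sub.add_parser("train", help="Train the model on faces A and B")
    train.add_argument("-A", "--input-A", required=True)
    train.add_argument("-B", "--input-B", required=True)
    train.add_argument("-m", "--model-dir", required=True)

    return parser
```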
I followed the guide
https://github.com/deepfakes/faceswap/blob/master/USAGE.md#testing-out-our-bot to test the trained model's output
python faceswap.py convert -i ~/faceswap/photo/trump/ -o ~/faceswap/output/ -m ~/faceswap/models/
but the results are like this:
Am i missed something? @joshua-wu @Ganonmaster
In its current state, the tool does not let users hit q to quit unless the preview window is active and has focus. The tool only saves after a certain number of iterations, so it would be great if we could enable "non-previewers" or Docker users to quit this way too and preserve their work.
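One preview-free escape hatch is to treat Ctrl+C as a save-and-exit signal rather than a hard kill, so Docker users lose nothing; the training loop below is a sketch with a hypothetical trainer interface, not the repo's actual code:

```python
def training_loop(trainer, save_every=100):
    """Run training until interrupted, saving periodically and also on
    Ctrl+C, so quitting without the preview window preserves the model."""
    iteration = 0
    try:
        while True:
            trainer.train_one_step()
            iteration += 1
            if iteration % save_every == 0:
                trainer.save_weights()
    except KeyboardInterrupt:
        trainer.save_weights()  # preserve work before exiting
```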
Installing on Windows is not as easy as it seems. You can go the hard way, which requires compiling some sources and therefore installing the compilation tools; or you can try an easier way, at the cost of not being totally up to date with the tools.
My experience is as follows:
Download the scikit_image-0.13.1-cp36-cp36m-win_amd64.whl file, then:
pip install scikit_image-0.13.1-cp36-cp36m-win_amd64.whl
pip install dlib==19.7.0 (this is not the latest, but it is precompiled)
pip install -r .\requirements-gpu.txt
The requirements should go straightforwardly, as the two big painful dependencies are already installed. If you get "No matching distribution found for tensorflow-gpu", it means you have the 32-bit version of Python!
Limit: 1823465472
InUse: 1823465472
MaxInUse: 1823465472
NumAllocs: 229
MaxAllocSize: 94452992
2018-01-28 16:25:05.010972: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\bfc_allocator.cc:277] ***************************************************************************************************x
2018-01-28 16:25:05.011266: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[2048]
## Other relevant information
- **Operating system and version:** Windows 8.1
- **Python version:** 3.6.4
- **Faceswap version:** a799f769e4c48908c3efd64792384403392f2e82
- **Faceswap method:** GPU
Hi, I tested training using the CPU, and everything worked fine: the preview window appeared, and the training data was saved in the models directory.
But when I switched to training on the GPU, there was no preview window; training finished in 2 minutes, and no training data appeared in the models directory.
I don't know what happened; maybe it's the video card I'm using.
I use a GTX 660, which supports CUDA compute capability 3.0.
Any help with this? Thank you.
These are screenshots from training on the GPU.
Resource exhausted: OOM when allocating tensor with shape[3, 3, 128, 256]
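Two common mitigations for these OOMs in the TF 1.x era were halving the batch size and letting the session grow GPU memory on demand instead of pre-allocating it. A sketch (the ConfigProto/Session names are the TF 1.x API the repo pinned at the time; they do not exist in TF 2.x, and the helper names are my own):

```python
def next_smaller_batch(batch_size):
    """Halve the batch size (minimum 1) after hitting an OOM."""
    return max(1, batch_size // 2)


def enable_gpu_memory_growth():
    """TF 1.x-era sketch: allocate GPU memory on demand instead of
    grabbing it all up front, which avoids some allocator OOMs.
    Imports are lazy so the pure helper above works without TF."""
    import tensorflow as tf
    from keras import backend as K
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    K.set_session(tf.Session(config=config))
```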