vijishmadhavan / ArtLine
A Deep Learning based project for creating line art portraits.
License: MIT License
Hi!
I would like to share one small tool that you may find useful. Right now I use it to compare the differences between U-2-Net and ArtLine; it lets you quickly switch between two or more models and observe the differences (keys 2, 3).
It uses FFHQ as the dataset.
The easiest way to get started is to simply try out on Colab: https://colab.research.google.com/github/vijishmadhavan/Light-Up/blob/master/ArtLine(Try_it_on_Colab).ipynb
It seems the notebook name changed, but the link above hasn't been updated to match.
Thanks.
Hi, I have built a simple Flask app to generate ArtLine images and open-sourced it here: https://github.com/jwenjian/artline-demo
Features:
python3 app.py
I'm new to GitHub and use it mainly for playing games, but I don't know how to get the link to a game right from GitHub. All I want to do is play a video game from this website during winter break, yet I don't know how. Can someone help me please?
Hi, great work!
This looks better than APDrawingGAN2 and U-2-Net.
Can you alter the code so it keeps the original aspect ratio and resolution for the output image?
I've got this working in the Google Colab notebook, but am running into issues with Flask, with the error AttributeError: Can't get attribute 'FeatureLoss' on <module '__main__' from '/home/ubuntu/anaconda3/bin/flask'>. I believe this is related to https://discuss.pytorch.org/t/error-loading-saved-model/8371 and https://stackoverflow.com/questions/27732354/unable-to-load-files-using-pickle-and-multiple-modules, where some solutions have been suggested, but I can't work out how to get it working in this case.
Basic Flask app (this is of course incomplete, but throws the AttributeError error when trying to load the model):
from flask import Flask
import time
import fastai
from fastai.vision import *
from fastai.utils.mem import *
from fastai.vision import open_image, load_learner, image, torch
import numpy as np
import urllib.request
import requests
import PIL.Image
from PIL import Image
from io import BytesIO
import torchvision.transforms as T

app = Flask(__name__)
class FeatureLoss(nn.Module):
    def __init__(self, m_feat, layer_ids, layer_wgts):
        super().__init__()
        self.m_feat = m_feat
        self.loss_features = [self.m_feat[i] for i in layer_ids]
        self.hooks = hook_outputs(self.loss_features, detach=False)
        self.wgts = layer_wgts
        self.metric_names = ['pixel',] + [f'feat_{i}' for i in range(len(layer_ids))
              ] + [f'gram_{i}' for i in range(len(layer_ids))]

    def make_features(self, x, clone=False):
        self.m_feat(x)
        return [(o.clone() if clone else o) for o in self.hooks.stored]

    def forward(self, input, target):
        out_feat = self.make_features(target, clone=True)
        in_feat = self.make_features(input)
        self.feat_losses = [base_loss(input, target)]
        self.feat_losses += [base_loss(f_in, f_out) * w
                             for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
        self.feat_losses += [base_loss(gram_matrix(f_in), gram_matrix(f_out)) * w**2 * 5e3
                             for f_in, f_out, w in zip(in_feat, out_feat, self.wgts)]
        self.metrics = dict(zip(self.metric_names, self.feat_losses))
        return sum(self.feat_losses)

    def __del__(self): self.hooks.remove()
path = Path(".")
learn = load_learner(path, 'ArtLine_650.pkl')

@app.route("/", methods=["POST"])
def home():
    # take user's sent image and turn it into line drawing
    return ''

if __name__ == '__main__':
    app.run(debug=True)
How would I get this working with Flask?
Hello vijishmadhavan:
After I run the step p,img_hr,b = model.predict(img_fast), memory increases by about 10 MB.
As long as I keep running it, memory keeps increasing, so I think it is a memory leak.
Please help me, thank you very much!
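For what it's worth, the usual first fix for memory that grows during PyTorch inference is wrapping the predict call in `torch.no_grad()` so no autograd graph is retained. Before changing anything, it helps to confirm genuine per-call growth; here is a stdlib sketch using `tracemalloc`, where `predict_stub` is a hypothetical stand-in for `model.predict(img_fast)`:

```python
import tracemalloc

def predict_stub(img):
    # hypothetical stand-in for model.predict(img_fast): allocates a fresh
    # result each call and keeps no references behind
    return [p * 2 for p in img]

tracemalloc.start()
for _ in range(5):                      # warm-up so one-time caches don't count as growth
    predict_stub(list(range(10_000)))
before = tracemalloc.take_snapshot()
for _ in range(50):
    predict_stub(list(range(10_000)))
after = tracemalloc.take_snapshot()
tracemalloc.stop()

growth = sum(s.size_diff for s in after.compare_to(before, 'lineno'))
leaks = growth > 1_000_000              # ~1 MB net growth over 50 calls is suspicious
print(leaks)                            # False for this non-leaking stub
```

If the same measurement around the real predict call shows steady growth, `with torch.no_grad():` around it is the first thing to try.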
I made a pure PyTorch demo of your work; it is easier to show:
https://github.com/Linzmin1927/Airline_torch
I'm looking to have the Colab save the file instead of showing it inline. I've got it working somewhat, but the saved image is "inverted".
Any hints on getting the image to save nicely?
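One plausible cause of the "inverted" look, assuming the saved tensor is still ImageNet-normalized: mean-subtracted values flip dark and light when written out raw. A sketch of undoing imagenet_stats before saving; `pred` here is random stand-in data in the 3xHxW layout fastai's predict returns:

```python
import numpy as np

# Hypothetical prediction: 3xHxW ImageNet-normalized floats
# (random stand-in data; the real values come from model.predict).
pred = np.random.randn(3, 64, 64).astype(np.float32)

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(3, 1, 1)

# Undo imagenet_stats normalization, clip to [0, 1], then convert to the
# HxWx3 uint8 layout that PIL.Image.fromarray(...).save(...) expects.
img = np.clip(pred * std + mean, 0.0, 1.0)
img = (img.transpose(1, 2, 0) * 255).astype(np.uint8)

print(img.shape, img.dtype)   # (64, 64, 3) uint8
# PIL.Image.fromarray(img).save('result.png')   # uncomment to write the file
```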
How do you train your model?
The dataset is not given.
path_hr = Path('/content/gdrive/My Drive/Apdrawing/draw tiny')
path_lr = Path('/content/gdrive/My Drive/Apdrawing/Tiny Real')
path_hr3 = Path('/content/gdrive/My Drive/Apdrawing/drawing')
path_lr3= Path('/content/gdrive/My Drive/Apdrawing/Real')
I'm trying to input a URL such as "https://www.freepik.com/free-photo/portrait-white-man-isolated_3199590.htm#page=1&query=Portrait&position=0". After running, it throws an error: cannot identify image file <_io.BytesIO object at 0x7f5d0c1f5780>. Why?
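That freepik address points to an HTML page (note the .htm), not an image file, so PIL receives HTML bytes and raises "cannot identify image file". Using the direct image URL (right-click the picture and copy the image address) fixes it; one can also sniff the downloaded bytes before handing them to PIL. A minimal stdlib check:

```python
def looks_like_image(data: bytes) -> bool:
    # cheap magic-byte sniff for the formats PIL is usually fed
    return data.startswith((b'\xff\xd8\xff',        # JPEG
                            b'\x89PNG\r\n\x1a\n',   # PNG
                            b'GIF87a', b'GIF89a'))  # GIF

html_page = b'<!DOCTYPE html><html><head>...</head></html>'   # what an .htm URL returns
jpeg_stub = b'\xff\xd8\xff\xe0' + bytes(16)                   # start of a real JPEG

print(looks_like_image(html_page))   # False: PIL raises "cannot identify image file"
print(looks_like_image(jpeg_stub))   # True
```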
Python 3.9
ERROR: Command errored out with exit status 1:
  command: 'E:\python_projects\ArtLine\.venv\Scripts\python.exe' 'E:\python_projects\ArtLine\.venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\PAWE~1\AppData\Local\Temp\tmph2xvoder'
  cwd: C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck
Complete output (57 lines):
running dist_info
creating C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info
writing C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info\PKG-INFO
writing dependency_links to C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info\dependency_links.txt
writing requirements to C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info\requires.txt
writing top-level names to C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info\top_level.txt
writing manifest file 'C:\Users\Paweł\AppData\Local\Temp\pip-install-68zlosqp\bottleneck\pip-wheel-metadata\Bottleneck.egg-info\SOURCES.txt'
Error in sitecustomize; set PYTHONVERBOSE for traceback:
SyntaxError: (unicode error) 'utf-8' codec can't decode byte 0xb3 in position 0: invalid start byte (sitecustomize.py, line 21)
Traceback (most recent call last):
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 207, in <module>
    main()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 197, in main
    json_out['return_val'] = hook(**hook_input['kwargs'])
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\pip\_vendor\pep517\_in_process.py", line 69, in prepare_metadata_for_build_wheel
    return hook(metadata_directory, config_settings)
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\build_meta.py", line 156, in prepare_metadata_for_build_wheel
    self.run_setup()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\build_meta.py", line 236, in run_setup
    super(_BuildMetaLegacyBackend,
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
    exec(compile(code, __file__, 'exec'), locals())
  File "setup.py", line 196, in <module>
    setup(**metadata)
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\__init__.py", line 145, in setup
    return distutils.core.setup(**attrs)
  File "C:\Python38\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Python38\lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "C:\Python38\lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\command\dist_info.py", line 31, in run
    egg_info.run()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\command\egg_info.py", line 296, in run
    self.find_sources()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\command\egg_info.py", line 303, in find_sources
    mm.run()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\command\egg_info.py", line 534, in run
    self.add_defaults()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\setuptools\command\egg_info.py", line 570, in add_defaults
    sdist.add_defaults(self)
  File "C:\Python38\lib\distutils\command\sdist.py", line 228, in add_defaults
    self._add_defaults_ext()
  File "C:\Python38\lib\distutils\command\sdist.py", line 311, in _add_defaults_ext
    build_ext = self.get_finalized_command('build_ext')
  File "C:\Python38\lib\distutils\cmd.py", line 299, in get_finalized_command
    cmd_obj.ensure_finalized()
  File "C:\Python38\lib\distutils\cmd.py", line 107, in ensure_finalized
    self.finalize_options()
  File "setup.py", line 75, in finalize_options
    import numpy
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\numpy\__init__.py", line 305, in <module>
    _win_os_check()
  File "E:\python_projects\ArtLine\.venv\lib\site-packages\numpy\__init__.py", line 302, in _win_os_check
    raise RuntimeError(msg.format(__file__)) from None
RuntimeError: The current Numpy installation ('E:\\python_projects\\ArtLine\\.venv\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86
----------------------------------------
ERROR: Command errored out with exit status 1: 'E:\python_projects\ArtLine\.venv\Scripts\python.exe' 'E:\python_projects\ArtLine\.venv\lib\site-packages\pip\_vendor\pep517\_in_process.py' prepare_metadata_for_build_wheel 'C:\Users\PAWE~1\AppData\Local\Temp\tmph2xvoder'
Check the logs for full command output.
Looks like your Colab links may be malformed. Something to look at there.
Sir, can you tell me the link for this dataset?
`path = Path('/content/gdrive/My Drive/Apdrawing')
path_hr = Path('/content/gdrive/My Drive/Apdrawing/draw tiny')
path_lr = Path('/content/gdrive/My Drive/Apdrawing/Tiny Real')
path_hr3 = Path('/content/gdrive/My Drive/Apdrawing/drawing')
path_lr3= Path('/content/gdrive/My Drive/Apdrawing/Real')`
Hi, @vijishmadhavan,
Thanks for sharing this nice repo! Just wonder if there is relevant paper or report that describes the method you present here?
Hello,
pip install runway-python fastai==1.0.61 numpy==1.17.2 pandas==1.1.2 torch==1.6.0 torchvision==0.7.0 fails.
Some of the versions need updating. Could you fix it, please?
Thank you.
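An untested guess at a fix: the exact pins numpy==1.17.2 and pandas==1.1.2 predate binary wheels for newer Pythons, so pip falls back to building them from source and fails. Loosening those pins while keeping fastai below 2.0 (ArtLine uses the fastai v1 API) might let pip pick wheels that exist for your interpreter:

```
runway-python
fastai>=1.0.61,<2.0   # ArtLine uses the fastai v1 API
numpy>=1.17.2         # loosened so pip can pick a wheel for your Python
pandas>=1.1.2
torch==1.6.0
torchvision==0.7.0
```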
In your training code, you use the function below to preprocess the training data.
def get_data(bs, size):
    data = (src.label_from_func(lambda x: path_hr/x.name)
            .transform(get_transforms(xtra_tfms=[gradient()]), size=size, tfm_y=True)
            .databunch(bs=bs, num_workers=0).normalize(imagenet_stats, do_y=True))
    data.c = 3
    return data
I'm just wondering if you are calculating gradient images for both the input and target images. Does this mean the network you are training also takes a gradient image as input and generates a gradient image as the output? If so how do you get the final results from the output gradient image?
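For intuition about what a gradient transform does to an image, here is a minimal finite-difference gradient magnitude in NumPy. This is only an illustration of the general idea, an assumption about what the custom gradient() tfm computes, not its actual code:

```python
import numpy as np

def gradient_magnitude(img):
    # finite-difference gradient magnitude of a 2-D image:
    # strong response at edges, zero in flat regions
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # a vertical edge between columns 3 and 4

g = gradient_magnitude(img)
print(g[:, 3:5].max() > 0)           # True: the edge produces a strong response
print(g[:, 0].max() == 0.0)          # True: flat regions have zero gradient
```

Because the transform is passed with tfm_y=True, it is applied to both the input photo and the target drawing, which is what the question above is asking about.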
Hi! Thank you so much for your exciting work! I had a problem when I ran it in Colab and don't know how to deal with it. Could you give me some advice?
When I run !pip install -r colab_requirements.txt:
Collecting fastai==1.0.61 (from -r colab_requirements.txt (line 1))
Downloading fastai-1.0.61-py3-none-any.whl (239 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 239.2/239.2 kB 4.8 MB/s eta 0:00:00
Collecting numpy==1.17.2 (from -r colab_requirements.txt (line 2))
Downloading numpy-1.17.2.zip (6.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.5/6.5 MB 43.7 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting pandas==1.1.2 (from -r colab_requirements.txt (line 3))
Downloading pandas-1.1.2.tar.gz (5.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2/5.2 MB 99.7 MB/s eta 0:00:00
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
How can I get your code so I can run it on my local computer?
FYI
Hello! When I use requirements.txt to install the libraries, it fails with the error:
ERROR:Could not build wheels for bottleneck which use PEP 517 and cannot be installed directly
Could you please tell me how to fix it? Thank you!
The whole error messages are as below:
ERROR: Command errored out with exit status 1:
command: /home/tigershan/anaconda3/envs/ArtLine/bin/python /home/tigershan/anaconda3/envs/ArtLine/lib/python3.7/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmp3rj4usov
cwd: /tmp/pip-install-ne7kzdj5/bottleneck_4132ceeb7f0a474ca245fb029b472ac0
Complete output (122 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.7
creating build/lib.linux-x86_64-3.7/bottleneck
copying bottleneck/__init__.py -> build/lib.linux-x86_64-3.7/bottleneck
copying bottleneck/_version.py -> build/lib.linux-x86_64-3.7/bottleneck
copying bottleneck/_pytesttester.py -> build/lib.linux-x86_64-3.7/bottleneck
creating build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/move_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/__init__.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/nonreduce_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/input_modification_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/nonreduce_axis_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/reduce_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/list_input_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/memory_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/util.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
copying bottleneck/tests/scalar_input_test.py -> build/lib.linux-x86_64-3.7/bottleneck/tests
creating build/lib.linux-x86_64-3.7/bottleneck/src
copying bottleneck/src/bn_template.py -> build/lib.linux-x86_64-3.7/bottleneck/src
copying bottleneck/src/__init__.py -> build/lib.linux-x86_64-3.7/bottleneck/src
copying bottleneck/src/bn_config.py -> build/lib.linux-x86_64-3.7/bottleneck/src
creating build/lib.linux-x86_64-3.7/bottleneck/benchmark
copying bottleneck/benchmark/__init__.py -> build/lib.linux-x86_64-3.7/bottleneck/benchmark
copying bottleneck/benchmark/autotimeit.py -> build/lib.linux-x86_64-3.7/bottleneck/benchmark
copying bottleneck/benchmark/bench.py -> build/lib.linux-x86_64-3.7/bottleneck/benchmark
copying bottleneck/benchmark/bench_detailed.py -> build/lib.linux-x86_64-3.7/bottleneck/benchmark
creating build/lib.linux-x86_64-3.7/bottleneck/slow
copying bottleneck/slow/__init__.py -> build/lib.linux-x86_64-3.7/bottleneck/slow
copying bottleneck/slow/move.py -> build/lib.linux-x86_64-3.7/bottleneck/slow
copying bottleneck/slow/reduce.py -> build/lib.linux-x86_64-3.7/bottleneck/slow
copying bottleneck/slow/nonreduce.py -> build/lib.linux-x86_64-3.7/bottleneck/slow
copying bottleneck/slow/nonreduce_axis.py -> build/lib.linux-x86_64-3.7/bottleneck/slow
UPDATING build/lib.linux-x86_64-3.7/bottleneck/_version.py
set build/lib.linux-x86_64-3.7/bottleneck/_version.py to '1.3.2'
running build_ext
running config
compiling '_configtest.c':
#pragma GCC diagnostic error "-Wattributes"
int __attribute__((optimize("O3"))) have_attribute_optimize_opt_3(void*);
int main(void)
{
return 0;
}
gcc -pthread -B /home/tigershan/anaconda3/envs/ArtLine/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -c _configtest.c -o _configtest.o
failure.
removing: _configtest.c _configtest.o
compiling '_configtest.c':
#ifndef __cplusplus
static inline int static_func (void)
{
return 0;
}
inline int nostatic_func (void)
{
return 0;
}
#endif
int main(void) {
int r1 = static_func();
int r2 = nostatic_func();
return r1 + r2;
}
gcc -pthread -B /home/tigershan/anaconda3/envs/ArtLine/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -c _configtest.c -o _configtest.o
failure.
removing: _configtest.c _configtest.o
compiling '_configtest.c':
#ifndef __cplusplus
static __inline__ int static_func (void)
{
return 0;
}
__inline__ int nostatic_func (void)
{
return 0;
}
#endif
int main(void) {
int r1 = static_func();
int r2 = nostatic_func();
return r1 + r2;
}
gcc -pthread -B /home/tigershan/anaconda3/envs/ArtLine/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -c _configtest.c -o _configtest.o
failure.
removing: _configtest.c _configtest.o
compiling '_configtest.c':
#ifndef __cplusplus
static __inline int static_func (void)
{
return 0;
}
__inline int nostatic_func (void)
{
return 0;
}
#endif
int main(void) {
int r1 = static_func();
int r2 = nostatic_func();
return r1 + r2;
}
ERROR: Failed building wheel for bottleneck
Failed to build bottleneck
ERROR: Could not build wheels for bottleneck which use PEP 517 and cannot be installed directly