mtli / PhotoSketch
Code for Photo-Sketching: Inferring Contour Drawings from Images :dog:
Home Page: http://www.cs.cmu.edu/~mengtial/proj/sketch/
License: Other
Hi. Your work is interesting! I would like to try it. Can your pre-trained model be used to test on my images?
Hello,
I can't run the test_pretrained.sh script after an (apparently) correct Conda env configuration (macOS 10.14.6). After setting the dataDir in the test_pretrained script, I get this error:
Traceback (most recent call last):
  File "test_pretrained.py", line 29, in <module>
    main()
  File "test_pretrained.py", line 21, in main
    for i, data in enumerate(dataset):
  File "/Users/kevin/Documents/PhotoSketch/data/custom_dataset_data_loader.py", line 41, in __iter__
    for i, data in enumerate(self.dataloader):
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/_utils.py", line 385, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 53, in default_collate
    storage = elem.storage()._new_shared(numel)
  File "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/storage.py", line 128, in _new_shared
    return cls._new_using_filename(size)
RuntimeError: error executing torch_shm_manager at "/Users/kevin/Documents/miniconda2/envs/sketch/lib/python3.6/site-packages/torch/bin/torch_shm_manager" at ../torch/lib/libshm/core.cpp:99
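This torch_shm_manager failure is often sidestepped on macOS by disabling DataLoader worker processes, so data is loaded in the main process and shared memory is never used. A minimal sketch of that platform check; the option name nThreads is taken from the options the script prints, and treating macOS as the special case is a common community workaround, not an official fix:

```python
import platform

def safe_num_workers(requested: int) -> int:
    """Fall back to single-process data loading on macOS, where
    torch_shm_manager shared-memory errors are common; keep the
    requested worker count elsewhere."""
    if platform.system() == "Darwin":  # macOS
        return 0  # load batches in the main process; no shared memory needed
    return requested
```

The returned value would then be passed to the script as --nThreads (which maps to the DataLoader's num_workers).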
I suspect a PyTorch related error:
After trying to reinstall the Conda env, I noticed this output:
Installing collected packages: torch, Pillow, torchvision, dominate, pyzmq, jsonpointer, jsonpatch, tornado, websocket-client, torchfile, idna, chardet, urllib3, requests, visdom
Found existing installation: torch 0.4.1
Uninstalling torch-0.4.1:
Successfully uninstalled torch-0.4.1
Successfully installed Pillow-5.0.0 chardet-3.0.4 dominate-2.4.0 idna-2.8 jsonpatch-1.24 jsonpointer-2.0 pyzmq-18.1.0 requests-2.22.0 torch-1.3.0 torchfile-0.1.0 torchvision-0.4.1 tornado-6.0.3 urllib3-1.25.6 visdom-0.1.8.9 websocket-client-0.56.0
When I run python -c "import torch; print(torch.__version__)" in the Terminal inside the "sketch" Conda environment, I get 1.3.0, which differs from the 0.4 version expected by the environment file. It seems the env file does not pin the torch version, so Conda auto-updates it to 1.3.0, which might cause trouble. Any clue on this one?
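Since the environment file apparently does not pin torch, a guard at the top of the test script would catch the mismatch before any cryptic runtime failure. A minimal sketch; the "0.4" series is inferred from this issue thread, not from official compatibility documentation:

```python
def check_torch_version(version: str, expected_prefix: str = "0.4") -> bool:
    """Return True if the installed torch belongs to the expected series.
    Strips local build suffixes such as '+cpu' before comparing."""
    return version.split("+")[0].startswith(expected_prefix)

# The Conda env above resolved to 1.3.0, which fails the check:
assert not check_torch_version("1.3.0")
assert check_torch_version("0.4.1")
```

In test_pretrained.py this could be called with torch.__version__ to fail fast with a readable message.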
Thanks in advance for your further help!
How can I use a different model (cycle_gan) with the pretrained weights?
Is there a way to output a list of paths to draw/plot instead of outputting a pixel image? Or an SVG or something?
I am trying to use this model to do a kind of semantic contour/edge detection on images, which I could then plot on paper with a robot arm.
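The model itself only outputs a raster image, so a post-processing step is needed for a plotter. As a hypothetical starting point, here is a minimal sketch that turns a binarized raster into horizontal pen strokes (runs of dark pixels); real vectorization (potrace-style curve fitting, SVG export) is considerably more involved:

```python
def raster_to_strokes(grid):
    """grid: 2D list of 0/1 ints (1 = ink).
    Returns a list of (row, x_start, x_end) horizontal strokes
    that a pen plotter could draw left to right."""
    strokes = []
    for y, row in enumerate(grid):
        x = 0
        while x < len(row):
            if row[x]:
                start = x
                while x < len(row) and row[x]:
                    x += 1
                strokes.append((y, start, x - 1))  # inclusive end
            else:
                x += 1
    return strokes
```

Each stroke tuple maps directly to a pen-down/move/pen-up sequence; merging adjacent rows into longer polylines would reduce pen lifts.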
Thanks
Hello,
I'm trying to run the code, but each time I try to load the model I get this error:
RuntimeError: Error(s) in loading state_dict for ResnetGenerator:
Missing key(s) in state_dict: "model.1.bias", "model.4.bias", "model.7.bias", "model.10.conv_block.1.bias", "model.10.conv_block.6.bias", "model.11.conv_block.1.bias", "model.11.conv_block.6.bias", "model.12.conv_block.1.bias", "model.12.conv_block.6.bias", "model.13.conv_block.1.bias", "model.13.conv_block.6.bias", "model.14.conv_block.1.bias", "model.14.conv_block.6.bias", "model.15.conv_block.1.bias", "model.15.conv_block.6.bias", "model.16.conv_block.1.bias", "model.16.conv_block.6.bias", "model.17.conv_block.1.bias", "model.17.conv_block.6.bias", "model.18.conv_block.1.bias", "model.18.conv_block.6.bias", "model.19.bias", "model.22.bias". .....
Any suggestions on how to solve this issue?
Thank you
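A likely cause (an assumption based on the pix2pix family of codebases, not confirmed by this repo): conv layers there get a bias only when the norm layer is instance norm, so a generator built with the wrong --norm setting expects bias tensors that a batch-norm checkpoint never saved, producing exactly this "Missing key(s)" list. The pretrained test script passes --norm batch. A small helper to see which keys a model expects but a checkpoint lacks:

```python
def missing_keys(model_keys, checkpoint_keys):
    """Keys the model expects but the checkpoint does not provide
    (what PyTorch reports as 'Missing key(s) in state_dict')."""
    return sorted(set(model_keys) - set(checkpoint_keys))
```

If the diff is all ".bias" entries, rebuilding the network with --norm batch is worth trying before resorting to load_state_dict(..., strict=False).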
Is it possible to reduce the line thickness during training? And is it possible to convert the output to a smooth vector during training?
I chose resnet_9blocks, and it seems the generator model fails to load:
RuntimeError: Error(s) in loading state_dict for ResnetGenerator
Missing key(s) in state_dict: "model.1.bias", "model.4.bias", "model.7.bias", "model.10.conv_block.1.bias", "model.10.conv_block.6.bias", "model.11.conv_block.1.bias", "model.11.conv_block.6.bias....
Hey, I built an ipynb notebook for Colab; you can now run it in a few seconds for free.
Heres the colab: https://colab.research.google.com/drive/1Jd_bwcu96KNk8ev0E3gCpL2ryte15U7B?usp=sharing
Something I didn't think to check for a while: the model actually leaves a lot of residual data in off-white colors that aren't visible to the human eye, but will be picked up by other ML algorithms.
In this example, if you adjust the levels, there is actually more data encoded about his hairline than is first apparent:
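One way to strip that near-invisible residue before feeding the sketch to another algorithm is a hard threshold that snaps off-white values to pure white. A minimal sketch on a plain 2D list of 8-bit gray values; the cutoff of 200 is an arbitrary assumption to tune per image:

```python
def binarize(pixels, threshold=200):
    """Snap every gray value at/above threshold to white (255) and the
    rest to black (0), discarding faint off-white residue."""
    return [[255 if p >= threshold else 0 for p in row] for row in pixels]

# A faint 230 residue pixel becomes pure white; a dark stroke stays black.
assert binarize([[230, 40, 255]]) == [[255, 0, 255]]
```

The same one-liner ports directly to a NumPy/PIL pipeline with np.where.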
Hi, mtli,
Thanks for your wonderful work.
I'm trying to download your dataset using the get_images.py script. However, I ran into the following error:
  File "get_images.py", line 27, in <module>
    urllib.request.urlretrieve(row[URL_COL], os.path.join(out_dir, name))
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/home/deng/anaconda3/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
It seems that the service is down or my IP address is blocked. I also tried using a VPN, which failed as well.
Could you kindly help me out?
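One frequent cause of HTTP 403 with urlretrieve is the remote host rejecting Python's default User-Agent string; sending a browser-like header sometimes resolves it. Whether the image hosts used by this dataset accept it is untested, so treat this as a sketch, not a guaranteed fix:

```python
import urllib.request

def build_request(url: str) -> urllib.request.Request:
    """Build a download request with a browser-like User-Agent, since some
    hosts return 403 Forbidden for urllib's default agent string."""
    return urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
```

In get_images.py the urlretrieve call could then be replaced with urllib.request.urlopen(build_request(row[URL_COL])) and a binary write of the response body.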
Hello! Thank you for your open source code.
I can't seem to find the code for the evaluation metrics. Could you provide it?
When I run the code, I keep getting the following error. Any suggestions would be appreciated!
Traceback (most recent call last):
File "/root/PycharmProjects/Photo-Sketching/train.py", line 68, in <module>
main()
File "/root/PycharmProjects/Photo-Sketching/train.py", line 39, in main
model.optimize_parameters() # error
File "/root/PycharmProjects/Photo-Sketching/models/pix2pix_model.py", line 139, in optimize_parameters
self.backward_D()
File "/root/PycharmProjects/Photo-Sketching/models/pix2pix_model.py", line 106, in backward_D
pred_real = self.netD(real_AB) # error------------------------------------------------
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/root/PycharmProjects/Photo-Sketching/models/networks.py", line 423, in forward
return self.model(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 6, 4, 4], expected input[1, 4, 256, 256] to have 6 channels, but got 4 channels instead
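In pix2pix, the conditional discriminator sees the photo and the sketch concatenated along the channel axis, so its first conv layer expects input_nc + output_nc channels. A weight of size [64, 6, 4, 4] means the discriminator was built for 3 + 3 = 6 channels, while the data supplies 3 + 1 = 4; passing --output_nc 1 when constructing the networks (as the pretrained test script does) would likely reconcile the two. A minimal arithmetic sketch, illustrative rather than the repo's code:

```python
def disc_in_channels(input_nc: int, output_nc: int) -> int:
    """Channel count of the concatenated (photo, sketch) pair fed to netD."""
    return input_nc + output_nc

# A discriminator built with the default output_nc=3 expects 6 channels...
assert disc_in_channels(3, 3) == 6
# ...but an RGB photo plus a 1-channel sketch only provides 4.
assert disc_in_channels(3, 1) == 4
```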
Hi, I am trying to use the pretrained model here.
I've set up the environment with conda.
Then I clone the repo to e:\PhotoSketch
download and unpack the pretrained.zip to e:\PhotoSketch\pretrained
(this results in e:\PhotoSketch\pretrained\latest_net_(D|G).pth being created)
I run (a windows .bat conversion of the .sh) test_pretrained.bat
SET DATADIR=E:\PhotoSketch\pretrained
python test_only.py --name pretrained --dataset_mode test_dir --dataroot examples --results_dir %DATADIR%\Exp\PhotoSketch\Results\ --checkpoints_dir %DATADIR%\Exp\PhotoSketch\Checkpoints\ --model pix2pix --which_direction AtoB --norm batch --input_nc 3 --output_nc 1 --which_model_netG resnet_9blocks --no_dropout
This results in an error saying the pretrained data couldn't be found (it is looking in the wrong location): E:\\PhotoSketch\\pretrained\\Exp\\PhotoSketch\\Checkpoints\\pretrained\\latest_net_G.pth
(sketch) E:\PhotoSketch>scripts\test_pretrained.bat
(sketch) E:\PhotoSketch>SET DATADIR=E:\PhotoSketch\pretrained
(sketch) E:\PhotoSketch>python test_only.py --name pretrained --dataset_mode test_dir --dataroot E:\pythondf\pythondf\python-3.6.3.amd64\PhotoSketch\examples --results_dir E:\PhotoSketch\pretrained\Exp\PhotoSketch\Results\ --checkpoints_dir E:\PhotoSketch\pretrained\Exp\PhotoSketch\Checkpoints\ --model pix2pix --which_direction AtoB --norm batch --input_nc 3 --output_nc 1 --which_model_netG resnet_9blocks --no_dropout
------------ Options -------------
aspect_ratio: 1.0
aug_folder: width-5
batchSize: 1
checkpoints_dir: E:\PhotoSketch\pretrained\Exp\PhotoSketch\Checkpoints\
color_jitter: False
crop: False
dataroot: E:\pythondf\pythondf\python-3.6.3.amd64\PhotoSketch\examples
dataset_mode: test_dir
display_id: 1
display_port: 8097
display_server: http://localhost
display_winsize: 256
file_name:
fineSize: 256
gpu_ids: [0]
how_many: 50
img_mean: None
img_std: None
init_type: normal
input_nc: 3
inverse_gamma: False
isTrain: False
jitter_amount: 0.02
loadSize: 286
lst_file: None
max_dataset_size: inf
model: pix2pix
nGT: 5
nThreads: 6
n_layers_D: 3
name: pretrained
ndf: 64
ngf: 64
no_dropout: True
no_flip: False
norm: batch
ntest: inf
output_nc: 1
phase: test
pretrain_path:
render_dir: sketch-rendered
resize_or_crop: resize_and_crop
results_dir: E:\PhotoSketch\pretrained\Exp\PhotoSketch\Results\
rot_int_max: 3
rotate: False
serial_batches: False
stroke_dir:
stroke_no_couple: False
suffix:
which_direction: AtoB
which_epoch: latest
which_model_netD: basic
which_model_netG: resnet_9blocks
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDirDataset] was created
pix2pix
initialization method [normal]
Traceback (most recent call last):
File "test_only.py", line 17, in <module>
model = create_model(opt)
File "E:\PhotoSketch\models\models.py", line 10, in create_model
model.initialize(opt)
File "E:\PhotoSketch\models\pix2pix_model.py", line 30, in initialize
self.load_network(self.netG, 'G', opt.which_epoch)
File "E:\PhotoSketch\models\base_model.py", line 56, in load_network
network.load_state_dict(torch.load(save_path))
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\site-packages\torch\serialization.py", line 265, in load
f = open(f, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'E:\\PhotoSketch\\pretrained\\Exp\\PhotoSketch\\Checkpoints\\pretrained\\latest_net_G.pth'
If I copy both *.pth files to that location and relaunch, it produces a bit more output but eventually dies with:
model [Pix2PixModel] was created
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "E:\PhotoSketch\test_only.py", line 20, in <module>
for i, data in enumerate(dataset):
File "E:\PhotoSketch\data\custom_dataset_data_loader.py", line 41, in __iter__
for i, data in enumerate(self.dataloader):
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\site-packages\torch\utils\data\dataloader.py", line 417, in __iter__
return DataLoaderIter(self)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\site-packages\torch\utils\data\dataloader.py", line 234, in __init__
w.start()
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "C:\Users\MyUserId\Miniconda3\envs\sketch\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
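On Windows, multiprocessing uses spawn instead of fork, so any script that starts DataLoader workers must keep its top-level logic behind an if __name__ == '__main__' guard (alternatively, --nThreads 0 avoids worker processes entirely). A minimal stdlib illustration of the guard; this is a hypothetical standalone example, not the repo's test_only.py:

```python
import multiprocessing as mp

def square(i: int) -> int:
    return i * i

def main() -> None:
    # Worker processes re-import this module on Windows; without the guard
    # below, that re-import would try to spawn workers again and fail with
    # the freeze_support()/bootstrapping error shown above.
    with mp.Pool(processes=2) as pool:
        print(pool.map(square, range(4)))

if __name__ == "__main__":
    main()
```

The same guard around the for-loop in test_only.py (or any script that creates the dataset) should let the spawned workers import the module safely.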
Hello, I cannot download the images; the error is:
  File "/root/anaconda3/envs/sketch/lib/python3.9/urllib/request.py", line 641, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
I am from mainland China; maybe access from this region has been blocked. Could you share the data with me? My email is [email protected]
Thank you very much!
Hello, thanks for providing the code; I am really interested in this work. I want to make the lines generated by the model thinner, so I retrained it as you described, changing aug_folder from width-5 to width-3. The hyperparameters I used are listed below. But after training, the model only generates an empty picture. Can you help me solve this problem?
python train.py \
  --name width3 \
  --dataroot ${dataDir}/ContourDrawing/ \
  --checkpoints_dir ${dataDir}/Photosketch/Checkpoints/ \
  --model pix2pix \
  --which_direction AtoB \
  --dataset_mode 1_to_n \
  --no_lsgan \
  --norm batch \
  --pool_size 0 \
  --output_nc 1 \
  --which_model_netG resnet_9blocks \
  --which_model_netD global_np \
  --batchSize 2 \
  --lambda_A 200 \
  --lr 0.0002 \
  --aug_folder width-3 \
  --crop --rotate --color_jitter \
  --niter 400 \
  --niter_decay 400