giddyyupp / ganilla
Official Pytorch implementation of GANILLA
License: Other
Hello. Thank you for the contribution!
I have a novice question about the output channel. I changed the output channel in base_option.py to 1 since my inputs are grayscale images, but an error is thrown: RuntimeError: The size of tensor a (32) must match the size of tensor b (31) at non-singleton dimension 3
I can't figure out why this happens. Could you please tell me what the reason might be, and what I should change in the network.py file?
Thank you very much!
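For what it's worth, a mismatch like "tensor a (32) must match tensor b (31)" usually means one input dimension is not divisible by the network's total downsampling stride, so a skip connection and an upsampled feature map end up one pixel apart. A minimal sketch of rounding a dimension up to the next valid size (the stride of 32 here is an assumption based on typical ResNet-style backbones, not something confirmed for this repo):

```python
def pad_to_multiple(size, multiple=32):
    """Round `size` up to the next multiple of `multiple`."""
    return ((size + multiple - 1) // multiple) * multiple

# e.g. a 500-pixel-wide image would be padded/resized to 512
print(pad_to_multiple(500))
```

Resizing or padding the input so that both height and width satisfy this before inference usually makes the mismatch disappear, independent of the number of channels.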
I'm using each of the models to compare the output, and I notice that the Marc Brown and Serap Deliorman models always produce a black image.
Thank you very much @kevroy314 for the instructions for Windows 10.
I substituted the pip calls with conda, as pip did not target the newly created environment in my case.
Executing the command
python .\test.py --dataroot .\datasets\monet2photo\testB --name AS_pretrained --model test --checkpoints_dir .\checkpoints\ --loadSize 512 --fineSize=512
I get the following error
RuntimeError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 2.00 GiB total capacity; 94.70 MiB already allocated; 16.35 MiB free; 112.00 MiB reserved in total by PyTorch)
This does not seem like a GANILLA issue at all. After some googling, I think it could be due to a memory leak, the batch size, the code not being wrapped in a function call, or something else.
So this is more like a question: Have you ever encountered this? Is there some way for me to investigate further?
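On a 2 GiB card it can help to estimate what a single activation tensor costs before picking --loadSize/--fineSize. A rough back-of-the-envelope helper (float32 assumed; the 64-channel 512x512 shape below is illustrative, not GANILLA's actual layer shape):

```python
def activation_mib(batch, channels, height, width, bytes_per_elem=4):
    """Approximate memory of one float32 activation tensor in MiB."""
    return batch * channels * height * width * bytes_per_elem / 2**20

# a single 64-channel feature map at 512x512 already needs 64 MiB
print(activation_mib(1, 64, 512, 512))
```

Halving --fineSize to 256 quarters every activation, which is often enough to fit a forward pass on a 2 GiB GPU; the CycleGAN-style codebase this is built on also accepts --gpu_ids -1 to fall back to the CPU.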
I can't download your Illustration dataset using your code; I get this error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/service.py", line 76, in start
    stdin=PIPE)
  File "/usr/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'geckodriver': 'geckodriver'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "openlibraryImageDownloaderMain.py", line 48, in <module>
    main(opts)
  File "openlibraryImageDownloaderMain.py", line 17, in main
    olh = s.OpenLibHelper(opts.openlib_username, opts.openlib_password)
  File "/content/ganilla/datasets/ganilla/datasets/scraper_openlibrary.py", line 31, in __init__
    self.browser = webdriver.Firefox(firefox_profile=profile)
  File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/firefox/webdriver.py", line 164, in __init__
    self.service.start()
  File "/usr/local/lib/python3.7/dist-packages/selenium/webdriver/common/service.py", line 83, in start
    os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: 'geckodriver' executable needs to be in PATH.
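For reference, Selenium can only launch Firefox if the geckodriver binary is discoverable on PATH. A minimal pre-flight check before running the scraper (pure standard library; the driver name is the only assumption):

```python
import shutil

def driver_on_path(name="geckodriver"):
    """Return the full path to `name` if it is on PATH, else None."""
    return shutil.which(name)

if driver_on_path() is None:
    print("geckodriver not found: download it from the mozilla/geckodriver "
          "releases page and place it in a directory on your PATH")
```

On Colab specifically, the binary has to be installed into the runtime (e.g. copied into /usr/local/bin) before the scraper is launched.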
How can I continue training from my latest checkpoint? Thank you.
All of Serap Deliorman's books appear to be missing from the Open Library site.
Is there any other way to download the Serap Deliorman dataset?
Thanks,
Noa Barzilay
Thank you! I can't understand why models/networks.py includes the UnetGenerator part. I can't find anywhere that you have used it.
Hey,
I was trying to continue training from epoch 40 with the options --continue_train --epoch_count 40,
and it was a bit odd, as it tried to load epoch 100, which didn't exist at the time. After a bit of investigation I found that the --epoch option defaults to '100', although in the CycleGAN project it defaults to 'latest'.
This is confusing, as it is also not mentioned in the FAQ.
Hi!
I was wondering where the implementation of your model is, since in your code I only see the implementation of pix2pix and CycleGAN. Am I missing something, or have you done something special in your implementation?
I have tried to download the illustration dataset according to the instructions and to get the crawler working.
However, it does not download any files at all, perhaps because of some change on openlibrary.org.
The script also does not run to the end; it raises the exception below. I am also attaching the corresponding browser state.
Traceback (most recent call last):
  File "openlibraryImageDownloaderMain.py", line 48, in <module>
    main(opts)
  File "openlibraryImageDownloaderMain.py", line 26, in main
    olh.search_author(illustrator, dir_name, lower_case_list)
  File "~/ganilla/datasets/scraper_openlibrary.py", line 52, in search_author
    search_res = self.browser.find_element_by_id("searchResults")
  File "~/selenium/webdriver/remote/webdriver.py", line 360, in find_element_by_id
    return self.find_element(by=By.ID, value=id_)
  File "~/selenium/webdriver/remote/webdriver.py", line 978, in find_element
    'value': value})['value']
  File "~/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "~/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: [id="searchResults"]
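A NoSuchElementException like this can mean either that openlibrary.org changed its markup (so the "searchResults" id no longer exists) or that the lookup ran before the page finished rendering. Whatever selector turns out to be correct, wrapping the lookup in an explicit retry loop is more robust than a single call; a generic, selenium-free sketch of such a wait:

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.1):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

In the scraper this would wrap the element lookup (catching NoSuchElementException inside the predicate and returning None); Selenium's own WebDriverWait with expected_conditions does the same thing natively.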
This isn't an issue for the developer, but if future folks want to run this on Windows 10, I was able to do it fairly easily with the following steps:
python==3.7
pip install -r requirements.txt
pip install pillow==6.1 dominate scipy==1.1.0
conda install pytorch=1.3.1 torchvision=0.4.2 -c pytorch
Create a test folder under ./datasets/ (e.g. kevintest) and place your images in there. You'll get the best results with square images.
python test.py --dataroot ./datasets/kevintest --name Miyazaki_pretrained --model test --checkpoints_dir ./checkpoints/ --loadSize 1024 --fineSize=1024
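Since square inputs work best, a tiny helper for computing a square canvas and the paste offsets that center a non-square image on it (pure arithmetic; whether you then pad with PIL or simply crop is up to you):

```python
def square_canvas(width, height):
    """Side length of the smallest square canvas for a width x height
    image, plus the (left, top) offsets that center the image on it."""
    side = max(width, height)
    return side, (side - width) // 2, (side - height) // 2

# a 1024x768 image centers on a 1024x1024 canvas at offset (0, 128)
print(square_canvas(1024, 768))
```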
It seems to work really well on scenery, but it fails on people and human-made objects in my opinion.
All of the options files are written with absolute file paths, which makes the recommended run commands in the README fail.
For example, the --checkpoints_dir
default value is: "/media/test/Samhi/GANILLA/fpn-gan/checkpoints/GANILLA"
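A local workaround is to edit the default in the options file. The CycleGAN-style codebase builds its options with argparse, so the fix is a one-line default change; the exact file and flag spelling should be checked against the repo's base_options.py. A sketch:

```python
import argparse

parser = argparse.ArgumentParser()
# replace the absolute default with a path relative to the repo root
parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints',
                    help='models are saved here')
opt = parser.parse_args([])  # no CLI args: fall back to the defaults
print(opt.checkpoints_dir)
```

With a relative default, passing --checkpoints_dir on the command line still overrides it, so the README commands work unchanged.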
Also, this project works fine on Windows with no adjustments at all; the only issue was the data-fetching .sh scripts, but manually downloading the monet2photo dataset is easy.
Hi, I have tried using U-Net as the backbone of CycleGAN for training on our dataset, and I find the results are better than with ResNet-18 or ResNet-34. When I use the ResNet backbone, the results contain many artifacts. So I'm curious how the U-Net backbone performed on your dataset, and why you ultimately chose the ResNet. Thank you; I'm sorry, my English is not good.
Hello, I have tried the pre-trained "Miyazaki" model on Colab without a good result (using one picture, the output just looks slightly filtered).
!python /content/drive/My\ Drive/ganilla/test.py --dataset_mode single --resize_or_crop none --dataroot /content/in1 --name /content/drive/My\ Drive/Miyazaki/ --model test --results_dir /content/
output:
creating web directory /content/drive/My Drive/Miyazaki/test_100
processing (0000)-th image... ['/content/in1/vlcsnap.png']