
Comments (132)

taki0112 avatar taki0112 commented on September 23, 2024 148

I'm talking to the company about whether it's okay to release a pre-trained model.
Please wait a little. Sorry.

from ugatit.

sjisjaksjk avatar sjisjaksjk commented on September 23, 2024 39

from ugatit.

taki0112 avatar taki0112 commented on September 23, 2024 35
  1. We released 50-epoch and 100-epoch checkpoints so that people can test more widely.

  2. We also published the selfie2anime dataset we used in the paper.

  3. We fixed the smoothing code.

  4. For test images, I recommend keeping your face centered.
from ugatit.

Ledarium avatar Ledarium commented on September 23, 2024 20

We want to make anime, please

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024 19

Alright, I've been training for over a day now and thought I'd share my pre-trained model.

Video

https://twitter.com/nathangloverAUS/status/1160188181414760449

Examples

https://twitter.com/nathangloverAUS/status/1160167218266570752

anime-example

Pre-trained model

https://www.kaggle.com/t04glovern/ugatit-selfie2anime-pretrained

from ugatit.

eddybogosian avatar eddybogosian commented on September 23, 2024 17

Guys, just chill for a moment. Taki already said that they're talking to the company about it. It's been 2 days. Calm down and wait. They know that we want this; flooding won't help.

In the meantime, why don't you try something yourself? You can use Microsoft's Azure or Amazon's AWS to train this type of network. Maybe you can even come up with something better! Who knows, right?

from ugatit.

SpectrePrediction avatar SpectrePrediction commented on September 23, 2024 15

For some reason I can't get over the firewall lately, and Kaggle won't verify my email registration...
Could someone who has downloaded the files share them on Baidu Netdisk?
I don't dare post in English, for fear of embarrassing our country's great LAN...

I also don't dare post in English, because my English is too poor (QAQ).
Here is a summary of the datasets and pre-trained models I have seen so far.
PS: many thanks to everyone for sharing them.

From heart4lor: a selfie2anime dataset, about 110 MB
Download the 2*3500-image training set via Google
Download via Baidu, access code: 1exs
According to him, this dataset still has room for improvement.

From t04glovern: a selfie2anime pre-trained model
Baidu link (from Zhihu), access code: 50lt
His own answer; his fork has a resize.py tool for shrinking images
Kaggle link
Download sources for his datasets:
People: crcv.ucf.edu/data/Selfie
Anime: gwern.net/Danbooru2018

From thewaifuai: a cat2dog pre-trained model
Baidu link, access code: aw35
Kaggle link
Or see his answer
cat2dog dataset Baidu link, access code: ryvj
It turns cats into dogs, and vice versa.

From Zhihu: selfie2anime datasets
Young women, 1000 images, 512 px: Baidu link, access code: udlm
Anime, 1000 images, 512 px: Baidu link, access code: d1yg

Links I saw in other Q&A threads; I am not sure whether they will help you:
http://www.seeprettyface.com/mydataset_page2.html
It has high-quality human and anime datasets, downloadable via Baidu.

Here is a Q&A thread worth referencing; it gives pointers on obtaining datasets.

That seems to be everything.
By the way, if your machine does not have enough GPU memory, the workarounds I have found are:
light = True,
lower iteration and epoch?
shrink the image size?
or test with a pre-trained model,
or wait for a more polished model?
Hope this helps.

from ugatit.

mickae1 avatar mickae1 commented on September 23, 2024 14

If you don't want to share, I can understand. You could create a website that offers the ability to convert photos into anime. The website would become very popular, and you could earn some money from the advertising.

from ugatit.

thewaifuai avatar thewaifuai commented on September 23, 2024 14

I have published a pre-trained model for cat2dog on kaggle. Please let me know if you have any issues with it. I saved the results in this pdf so you can see what it looks like:
results.pdf I used the cat2dog dataset from DRIT.

It takes roughly 4+ days to train a cropped-face dataset and 16+ days to train a cropped-body dataset on Nvidia GPUs (estimates). Since a single training run takes many days, and it takes many iterations of training, it will take some time, but eventually many people will publish and share their pre-trained models in the weeks to come. Datasets can be found at DRIT. For selfie2anime you can use the Selfie dataset and an anime face dataset. Other potential anime face dataset sources: thiswaifudoesnotexist, animeGAN for generating anime images, and a one-click-download anime face dataset.
UGATIT is quite general: you really just need a folder of anime faces and a folder of human faces, and it figures out the rest by itself.
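(For anyone setting this up from scratch, a minimal sketch of scaffolding the folder layout UGATIT reads from, based on the trainA/trainB/testA/testB names used throughout this thread; which domain goes in A and which in B is discussed further down.)

import os

# Scaffold the dataset layout; put one domain (e.g. selfies) in the *A folders
# and the other (e.g. anime faces) in the *B folders.
base = os.path.join("dataset", "selfie2anime")
for split in ("trainA", "trainB", "testA", "testB"):
    os.makedirs(os.path.join(base, split), exist_ok=True)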

from ugatit.

cpury avatar cpury commented on September 23, 2024 12

FYI, I'm using a quickly-assembled, crappy dataset and a relatively slow cloud GPU machine. Also, I reduced the resolution to 100x100 pixels (256 just takes too long for me). The results look like this after one day of training:

Screen Shot 2019-08-09 at 08 19 23

Screen Shot 2019-08-09 at 08 20 25

Not too bad, but still a lot of room for improvement :)

What I can recommend if you'd like to create a better one:

  • Make sure the two datasets have similar poses / distances to the face. You can tell in mine that the anime data is much more close-up to the face and so the model learned that part of the transformation is "zooming in".
  • Make sure the anime dataset is diverse. Right now, in my model, everything from black men to old women gets transformed into 12-yo-looking girls with giant eyes, white skin, and bangs. I'd really rather it learns something more diverse...
  • Get a serious cloud machine and expect to spend some time. The batch size of 1 is killing me 😅

from ugatit.

coldliang avatar coldliang commented on September 23, 2024 11

Why do these notification emails keep coming and coming? (quoted reply-by-email, reproducing cpury's comment below about batch size and pre-resizing images)

Just cancel "watching" this project at the top of the repository page (the "unwatch" button) and you will stop receiving the emails.

from ugatit.

WinHGGG avatar WinHGGG commented on September 23, 2024 9

And now it gave me some FATAL results.
TIM图片20190815215028

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024 8

@t04glovern could you please share the data sets?

I just used the following:

I'll try to zip them up and upload a bundle when I get a tick

My ratios are:

testA (anime) - 9653
testB (selfie) - 5121
trainA (anime) - 37183
trainB (selfie) - 22259

I have a fork going on https://github.com/t04glovern/UGATIT. There's a resize.py tool you can use to reduce the image sizes if you need.

from ugatit.

heart4lor avatar heart4lor commented on September 23, 2024 8

Here's some manually picked checkpoint samples after ~15 hours training on GTX 1080

image image
image image
image image
image

Dataset:

And I did some preparations:

Following Section 5.2 (Dataset) of the paper, I wrote this script and selected 3400 female selfies as the train set and 100 as the test set (though the selfie labels seem to contain some errors, so a few male selfies remain). Correspondingly, I chose the 3500 largest anime pictures as the anime train and test sets.

You can download this 2*3500 dataset from here.

This dataset can still be improved, especially the selfie half. If the angles and poses matched the anime images more closely, I think the results would be better. In fact, the manually picked samples above are the selfies that came out relatively well.
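(Not heart4lor's actual script, just a minimal sketch of the "keep the N largest anime pictures" step; the paths and the count of 3500 are placeholders.)

import os
import shutil

src_dir = "anime_raw"          # hypothetical folder of candidate anime images
dst_dir = "anime_trainset"     # hypothetical output folder
n_keep = 3500

os.makedirs(dst_dir, exist_ok=True)
files = [os.path.join(src_dir, f) for f in os.listdir(src_dir)
         if f.lower().endswith((".jpg", ".jpeg", ".png"))]
files.sort(key=os.path.getsize, reverse=True)  # file size as a rough quality proxy
for path in files[:n_keep]:
    shutil.copy(path, dst_dir)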

from ugatit.

thewaifuai avatar thewaifuai commented on September 23, 2024 7

@thewaifuai Hello! Would you like to publish the pre-trained model of selfie2anime in future? Thanks.

Yes

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024 6

If you open a patreon or something, we can subscribe for your pre-trained model 🗡

from ugatit.

thewaifuai avatar thewaifuai commented on September 23, 2024 6

I have published a pre-trained model for cat2dog on kaggle. Please let me know if you have any issues with it. I saved the results in this pdf so you can see what it looks like:
results.pdf I used the cat2dog dataset from DRIT.

@thewaifuai I'm not sure why but your cat2dog kaggle link doesn't work?

Oops kaggle datasets are private by default, I had to manually make it public. It is now public and should work.

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024 6

Here's a couple of mine, just over 18 hours of training (rtx2080ti) on the selfie & anime datasets linked in the other issue.

https://twitter.com/nathangloverAUS/status/1159871270986534913

1
2
3

from ugatit.

vulcanfk avatar vulcanfk commented on September 23, 2024 5

Alternatively, it would be amazing if you could share the selfie2anime dataset.

See issue #6

from ugatit.

leemengtw avatar leemengtw commented on September 23, 2024 5

I got this result by using taki's model:

image

from ugatit.

thewaifuai avatar thewaifuai commented on September 23, 2024 3

@tafseerahmed Use the --light True option; if that does not work, run pkill python3 and then try again with --light True. This runs the light version of UGATIT.
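(For reference, assuming the dataset folder is named selfie2anime as elsewhere in this thread, the light run would look something like:

python main.py --dataset selfie2anime --phase train --light True

with --phase test for inference, as in the test commands shown later in the thread.)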

from ugatit.

FutaAlice avatar FutaAlice commented on September 23, 2024 3

magic code for pan.baidu.com: 1qo9prDlmVSm5aFz_H1jntg

from ugatit.

ht290 avatar ht290 commented on September 23, 2024 2

image

My samples after 3 days...

from ugatit.

thorikawa avatar thorikawa commented on September 23, 2024 2

@taki0112 Thank you for uploading models! I had successfully downloaded them but got the following error when I tried to extract them.

$ unzip ~/Downloads/50_epoch_selfie2anime_checkpoint.zip 
Archive:  /home/poly/Downloads/50_epoch_selfie2anime_checkpoint.zip
warning [/home/poly/Downloads/50_epoch_selfie2anime_checkpoint.zip]:  4294967296 extra bytes at beginning or within zipfile
  (attempting to process anyway)
file #1:  bad zipfile offset (local header sig):  4294967296
  (attempting to re-compensate)
  inflating: checkpoint/.DS_Store    
  inflating: __MACOSX/checkpoint/._.DS_Store  
  inflating: checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/UGATIT.model-500001.meta  
  inflating: checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/checkpoint  
  inflating: __MACOSX/checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/._checkpoint  
  inflating: checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/UGATIT.model-500001.data-00000-of-00001  
  error:  invalid compressed data to inflate
file #12:  bad zipfile offset (local header sig):  651653367
  (attempting to re-compensate)
  inflating: checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing/UGATIT.model-500001.index 

The zip file check also failed, as follows.

$ zip -T ~/Downloads/50_epoch_selfie2anime_checkpoint.zip 
    zip warning: unexpected signature on disk 0 at 672935689

    zip warning: archive not in correct format: /home/poly/Downloads/50_epoch_selfie2anime_checkpoint.zip
    zip warning: (try -F to attempt recovery)

zip error: Zip file structure invalid (/home/poly/Downloads/50_epoch_selfie2anime_checkpoint.zip)

It seems the zip files are corrupted?

from ugatit.

leemengtw avatar leemengtw commented on September 23, 2024 2

Please try it on new pictures: download new pictures from the web to test, and see what the actual results look like?

@QQ2737499951 Good point. I did try lots of selfies of myself and my friends, and in my experience so far, I would say about 20% of the time (1 out of 5) you get a relatively good result. The following are all new pictures:

image

from ugatit.

neuralphene avatar neuralphene commented on September 23, 2024 1
(quoting taki0112's checkpoint release announcement above)

===================

Sorry, you can't view or download this file at present.

Too many users have recently viewed or downloaded this file. Please try to access this file later. If you try to access a file that is particularly large or shared by many people, it may take up to 24 hours to view or download the file. If you are still unable to access the file after 24 hours, please contact your domain administrator.

A workaround for this: Add the file to your Drive, then Make a Copy of it. You can then download the copy that is stored in your drive.

from ugatit.

thorikawa avatar thorikawa commented on September 23, 2024 1

(quoting my earlier unzip error report above)

I believe the issue is to do with downloading from a browser and it not being able to handle the huge file. I found that following this guide I was able to download the zip using curl / wget and then extract it without issues

@t04glovern I have retried using gdown.pl as mentioned on Stack Overflow, but I still got the same error when extracting. The zip files still seem to be corrupted.

Can you share md5sum of your 100_epoch_selfie2anime_checkpoint.zip?
Mine is as below.

$ md5sum 100_epoch_selfie2anime_checkpoint.zip 
1acedc844eca4605bad41ef049fba401  100_epoch_selfie2anime_checkpoint.zip

from ugatit.

hitechbeijing avatar hitechbeijing commented on September 23, 2024 1

The link is only available for one hour, because the data traffic is not free.

from ugatit.

opentld avatar opentld commented on September 23, 2024 1

I got this result by using taki's model:

quite good ~!!! @leemengtaiwan

from ugatit.

chaowentao avatar chaowentao commented on September 23, 2024

I'm talking to the company about whether it's okay to release a pre-trained model.
Please wait a little. Sorry.

I trained the model on my own dataset, and the results don't look very good. Hopefully you will share your pre-trained model @taki0112

from ugatit.

cpury avatar cpury commented on September 23, 2024

Alternatively, it would be amazing if you could share the selfie2anime dataset.

from ugatit.

kioyong avatar kioyong commented on September 23, 2024

Can't wait to get a pre-trained model~ please~

from ugatit.

hensat avatar hensat commented on September 23, 2024

Let's hope the company will allow you to release the model.

from ugatit.

hafniz avatar hafniz commented on September 23, 2024

It would be really helpful if you could release an existing model for our reference. Please~

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

The thing we need to understand is that no one likes begging and pleading. These people have worked hard on something, and it's completely up to them if they choose to release their models or datasets. I appreciate the fact that they open-sourced their code. Personally, I wouldn't mind even paying for their models and dataset. In the meantime let's stop flooding this thread and wait for @taki0112 's response.

from ugatit.

amberjxd avatar amberjxd commented on September 23, 2024

@thewaifuai Hello! Would you like to publish the pre-trained model of selfie2anime in future? Thanks.

from ugatit.

LiuShaohan avatar LiuShaohan commented on September 23, 2024

(quoting cpury's results and dataset advice above)

Can you share your training dataset, or your pre-trained model?

Thanks very much!~

This is my email:[email protected]

from ugatit.

td0m avatar td0m commented on September 23, 2024

I have published a pre-trained model for cat2dog on kaggle. Please let me know if you have any issues with it. I saved the results in this pdf so you can see what it looks like:
results.pdf I used the cat2dog dataset from DRIT.

@thewaifuai I'm not sure why but your cat2dog kaggle link doesn't work?

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

I have published a pre-trained model for cat2dog on kaggle. Please let me know if you have any issues with it. I saved the results in this pdf so you can see what it looks like:
results.pdf I used the cat2dog dataset from DRIT.

I am actively working on writing a TPU version of UGATIT. If anyone is interested please respond to my UGATIT TPU issue. I am interested with working with others to make the TPU version.

It takes 4+ days to train cropped face dataset and 16+ days to train cropped body dataset on Nvidia GPUs (estimates). Since it takes many days to train the dataset once, and it takes many iterations of training it will take some time but eventually many people will publish and share their pre-trained models in the weeks to come. Datasets can be found at DRIT. For selfie2anime you can use datasets selfie and anime face dataset. Other potential anime face dataset sources: thiswaifudoesnotexist,
animeGAN for generating anime images and a one click download anime face dataset.
UGATIT is quite general, you really just need a folder of anime faces and a folder of human faces and it figures the rest by itself.

Should the images in trainA and trainB be the same size? The selfies are 306x306, but my anime faces were 512x512, a mix of PNGs and JPGs. I did run into some errors.

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

This is on 4x P100s with 11 GB of VRAM each. trainA is the selfie dataset and trainB is http://www.seeprettyface.com/mydataset_page2.html plus a 1k dump of male anime faces from gwern's TWDNEv2 website.
image
image

I guess if I reduce the batch size, I can train quickly and release the pre-trained models?

from ugatit.

cpury avatar cpury commented on September 23, 2024

@tafseerahmed the size and format of the images shouldn't matter. They get resized anyway AFAIK.

The error you're getting is OOM - out of memory. I believe you don't have enough available RAM (as opposed to GPU memory) to create the model. Is that possible?

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

@tafseerahmed the size and format of the images shouldn't matter. They get resized anyway AFAIK.

The error you're getting is OOM - out of memory. I believe you don't have enough available RAM (as opposed to GPU memory) to create the model. Is that possible?

image

Someone is using 2 of the GPUs right now, but I still have over 256 GB of RAM available.

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

@tafseerahmed use the --light True option, if that does not work run pkill python3 and then try again with the --light True option. This runs the light version of UGATIT.

wouldn't that reduce the quality of final results?

from ugatit.

thewaifuai avatar thewaifuai commented on September 23, 2024

@tafseerahmed use the --light True option, if that does not work run pkill python3 and then try again with the --light True option. This runs the light version of UGATIT.

wouldn't that reduce the quality of final results?

Yes

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

@tafseerahmed use the --light True option, if that does not work run pkill python3 and then try again with the --light True option. This runs the light version of UGATIT.

wouldn't that reduce the quality of final results?

Yes

lol thanks, it's training now
image
but did you train yours on the heavy model instead of the light one? I imagine the full model requires more than 16 GB of VRAM

from ugatit.

cpury avatar cpury commented on September 23, 2024

The light version significantly reduces the capacity of the model. I haven't trained for long but I don't think it's worth trying.

With that hardware, you really should not have any memory issues. Maybe the dataset is too big and already takes up most of the memory? I don't know, but I think you should investigate / experiment more.

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

The light version significantly reduces the capacity of the model. I haven't trained for long but I don't think it's worth trying.

With that hardware, you really should not have any memory issues. Maybe the dataset is too big and already takes up most the memory? I don't know but I think you should investigate / experiment more.

The batch size was set to 1 by default (which is ineffective when you have a GPU), so I can't imagine the hardware was the issue. I will debug more and let you guys know; in the meantime, I am training the light model.

from ugatit.

cpury avatar cpury commented on September 23, 2024

Yeah, batch size of 1 is necessary for cycle GANs.

Another thing I've learned: You can increase the speed of your training quite significantly by already providing the right image size. Because otherwise the training procedure will take loads of time just resizing images. Here's what it says in the paper:

All models are trained using Adam [19] with β1=0.5 and β2=0.999. For data augmentation, we flipped the images horizontally with a probability of 0.5, resized them to 286 x 286, and random cropped them to 256 x 256.

I would first use imagemagick to batch-resize all your data to 286x286 or similar. I think that could save you a day or so in training time.
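(A minimal Pillow-based sketch of the same batch resize, if you'd rather not use imagemagick; the directory below is a placeholder and the resize is done in place.)

from pathlib import Path
from PIL import Image

src_dir = Path("dataset/selfie2anime/trainA")  # repeat for trainB, testA, testB
for path in src_dir.iterdir():
    if path.suffix.lower() not in (".jpg", ".jpeg", ".png"):
        continue
    img = Image.open(path).convert("RGB")
    img.resize((286, 286), Image.LANCZOS).save(path)  # pre-resize so training skips this work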

from ugatit.

cpury avatar cpury commented on September 23, 2024

BTW I didn't have any trouble getting the non-light version to run on a machine with much less RAM and only one GPU. So I can only think of two possible ways why it fails for you:

  1. Some configuration issue
  2. Your dataset being too large

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

trainA is the selfie dataset with 47k images.
trainB is the anime dataset with 5k images.

I will try again tonight when more resources are free on the full model.
The config is completely default. I will resize the images and run again, thanks for the tip!

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

Screenshot from 2019-08-09 17-35-03
Okay, this is weird: trainA and trainB are both 286x286 and n=5003. I still can't train them on the full model.

from ugatit.

cpury avatar cpury commented on September 23, 2024

Some hand-picked results after two days of training on a low-quality dataset: https://twitter.com/cpury123/status/1159844171047301121

results

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

@t04glovern could you please share the data sets?

from ugatit.

cpury avatar cpury commented on September 23, 2024

@t04glovern That's amazing! The structure of the conversions is much more impressive than mine!

from ugatit.

opentld avatar opentld commented on September 23, 2024

This is on a 4x P100 with 11 GB VRAM on each trainA is selfie dataset and trainB is http://www.seeprettyface.com/mydataset_page2.html + 1k dump of male anime from gwern's TWDNEv2 website.

I guess, if I reduce the batch size? then I can quickly train and release the pre-trained models.
I am waiting for your models... @tafseerahmed

from ugatit.

tafseerahmed avatar tafseerahmed commented on September 23, 2024

Here's a couple of mine, just over 18 hours of training (rtx2080ti) on the selfie & anime datasets linked in the other issue.

https://twitter.com/nathangloverAUS/status/1159871270986534913

1
2
3

Could you publish the pre-trained model?

from ugatit.

amberjxd avatar amberjxd commented on September 23, 2024

Alright, I've been training for over a day now and thought I'd check my pre-trained model.

Examples

https://twitter.com/nathangloverAUS/status/1160167218266570752

anime-example

Pre-trained model

https://www.kaggle.com/t04glovern/ugatit-selfie2anime-pretrained

Great! Keep going!

from ugatit.

opentld avatar opentld commented on September 23, 2024

For some reason I can't get over the firewall lately, and Kaggle won't verify my email registration...
Could someone who has downloaded the files share them on Baidu Netdisk?
I don't dare post in English, for fear of embarrassing our country's great LAN...

from ugatit.

xizeyoupan avatar xizeyoupan commented on September 23, 2024

(quoting opentld's request for a Baidu Netdisk mirror above)

None of the current trained models give good results anyway; better to wait a while longer.

from ugatit.

TinySlik avatar TinySlik commented on September 23, 2024

(quoting opentld's request for a Baidu Netdisk mirror above)

Link: https://pan.baidu.com/s/13gXM82kgU6yn0NpmlSXY1g  Access code: 3gn5

This environment is too big.
img
So I think I'm getting stuck where "Executing transaction: done" finishes.
Then, when I try the command, I get "ImportError: cannot import name 'prefetch_to_device'".
Now I'm sad...

from ugatit.

suedroplet avatar suedroplet commented on September 23, 2024

(quoting t04glovern's pre-trained model post above)

Hi. Thanks for your awesome work! But I got some errors on my computer. When I tried the train command, it raised a "failed to load the checkpoint" error, and I got a strange result after running the test command. Could you please tell me why?
image

from ugatit.

td0m avatar td0m commented on September 23, 2024

@suedroplet I had a similar issue; I think it's to do with the checkpoint folder naming. To solve it, I trained the model myself (literally 1 iteration, then quit) just so that it created a checkpoint folder for the dataset. I then replaced all the files in ./checkpoint/<your-generated-dataset-model-folder>/ with the files provided in the pre-trained model.
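(A rough sketch of that workaround; the generated folder name below is taken from the checkpoint listing earlier in this thread, yours may differ depending on your flags, and the source folder is just wherever you extracted the pre-trained files.)

import shutil
from pathlib import Path

generated = Path("checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing")
pretrained = Path("pretrained_checkpoint")  # hypothetical folder holding the downloaded files

for f in pretrained.iterdir():
    if f.is_file():
        shutil.copy(f, generated / f.name)  # overwrite the freshly generated files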

from ugatit.

xizeyoupan avatar xizeyoupan commented on September 23, 2024

(quoting t04glovern's pre-trained model post and suedroplet's "failed to load the checkpoint" question above)

Just add the --smoothing true --light true arguments.

from ugatit.

sdy0803 avatar sdy0803 commented on September 23, 2024

My result from applying this pre-trained model is kinda weird. It basically just blurred the original selfie, which I put in my testB folder.

here's my command:
python main.py --dataset selfie2anime --phase test --smoothing true --light true

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024

My result applying this pre-trained model is kinda weird. it basically just blurred the original selfie which i put in my testB folder.
image
here's my command:
python main.py --dataset selfie2anime --phase test --smoothing true --light true

Put the selfies in testA. testB is for the Anime you'd like to convert to human

from ugatit.

sdy0803 avatar sdy0803 commented on September 23, 2024

My result applying this pre-trained model is kinda weird. it basically just blurred the original selfie which i put in my testB folder.

here's my command:
python main.py --dataset selfie2anime --phase test --smoothing true --light true

Put the selfies in testA. testB is for the Anime you'd like to convert to human

it works! Thanks!

from ugatit.

sdy0803 avatar sdy0803 commented on September 23, 2024

Now I'm looking for a non-light pre-trained model, hahaha.

from ugatit.

suedroplet avatar suedroplet commented on September 23, 2024

(quoting the exchange above about adding --smoothing true --light true)

It does work. Thanks!

from ugatit.

dpyneo avatar dpyneo commented on September 23, 2024

(quoting the testA/testB exchange above)

Hello, I put anime images in testB, but the result looks like this. What's wrong? Is it being saved without training?
image

from ugatit.

WinHGGG avatar WinHGGG commented on September 23, 2024

Hi,
I used Google Colab to train the model with the datasets https://www.crcv.ucf.edu/data/Selfie/
and http://www.nurs.or.jp/~nagadomi/animeface-character-dataset/ .
But after 4 epochs (10,000 iterations per epoch), it gave me results like this:
ab
ab(1)
Is it actually training?
(I'm sure I put them into the right folders.)
):

from ugatit.

cqdjsp avatar cqdjsp commented on September 23, 2024

(quoting SpectrePrediction's summary of datasets and pre-trained models above)

I trained a model and would like to convert it to a .pb file. Do you happen to know what the input and output nodes are?

from ugatit.

wings0820 avatar wings0820 commented on September 23, 2024

(quoting SpectrePrediction's summary above and cqdjsp's question about converting a trained model to a .pb file)

Have you finished training? Could you share your trained model? You can look up the names of the model's nodes with the Netron tool.
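(For what it's worth, a rough TF 1.x sketch of freezing a checkpoint into a .pb; the checkpoint folder name is the one from this thread, but the output node name is only a placeholder that you would look up with Netron or by listing the graph's operations.)

import tensorflow as tf  # TensorFlow 1.x, as used by this repo

ckpt_dir = "checkpoint/UGATIT_selfie2anime_lsgan_4resblock_6dis_1_1_10_10_1000_sn_smoothing"
ckpt = tf.train.latest_checkpoint(ckpt_dir)
saver = tf.train.import_meta_graph(ckpt + ".meta")

with tf.Session() as sess:
    saver.restore(sess, ckpt)
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["output_node_name"])  # placeholder name; check with Netron
    with tf.gfile.GFile("ugatit_frozen.pb", "wb") as f:
        f.write(frozen.SerializeToString())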

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024

I've added a Dockerfile + Flask app for performing inference against the model. It can be found at https://github.com/t04glovern/UGATIT

It's just a simple web interface at the moment. I might stand up a GKE + Cloud Run tomorrow to host it until my funds run out.
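(This is not t04glovern's actual app, just a minimal Flask skeleton of the idea, with the UGATIT inference call left as a stub.)

import os
from flask import Flask, request, send_file
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "uploads"
os.makedirs(UPLOAD_DIR, exist_ok=True)

def run_inference(image_path):
    # Stub: call into the UGATIT test pipeline here and return the path
    # of the generated anime image.
    raise NotImplementedError

@app.route("/convert", methods=["POST"])
def convert():
    f = request.files["image"]
    path = os.path.join(UPLOAD_DIR, secure_filename(f.filename))
    f.save(path)
    return send_file(run_inference(path), mimetype="image/png")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)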

Screenshot from 2019-08-14 23-54-10

from ugatit.

KR3ND31 avatar KR3ND31 commented on September 23, 2024

image

My samples after 3 days...

Can you upload your model?

from ugatit.

HLearning avatar HLearning commented on September 23, 2024

Epochs 40-70: d_loss = 1.2, g_loss = 15; the loss stabilizes. Bad results.

from ugatit.

chenzeng11 avatar chenzeng11 commented on September 23, 2024

image

My samples after 3 days...

Could you upload your model? I have trained my model for several days; the losses are as follows:
Screenshot from 2019-08-15 10-17-35
But the results are bad.
collage

from ugatit.

HLearning avatar HLearning commented on September 23, 2024

(quoting chenzeng11's loss curves and results above)

Our training results look about the same. WeChat: 544705740, want to compare notes?

from ugatit.

cqdjsp avatar cqdjsp commented on September 23, 2024

(quoting the exchange above about converting a trained model to .pb and sharing it)

My training results are also too painful to look at, and I ran fewer iterations than t04glovern, so I suggest you just download his.
image

from ugatit.

hitechbeijing avatar hitechbeijing commented on September 23, 2024

(quoting the exchange above about training results and converting to .pb)

If your GPU is powerful enough (more than 7 TFLOPS of single-precision compute and at least 8 GB of VRAM), I suggest raising the epoch count and turning light off. I don't have the hardware; AMD cards currently aren't supported for deep learning.

from ugatit.

QQ2737499951 avatar QQ2737499951 commented on September 23, 2024
(quoting taki0112's checkpoint release and the Google Drive "can't view or download this file at present" error above)

Could anyone who manages to download the 50-epoch and 100-epoch checkpoints share them on Baidu Yun? Thank you!

QQ 2737499951, to study this together.

from ugatit.

xizeyoupan avatar xizeyoupan commented on September 23, 2024
(quoting QQ2737499951's request above for a Baidu Yun mirror of the checkpoints)

Save it to your own Google Drive, then right-click, make a copy, and download the copy.

from ugatit.

QQ2737499951 avatar QQ2737499951 commented on September 23, 2024
(quoting the suggestion above to save the file to Google Drive and download a copy)

Thanks, but I think making a copy of this file has been disabled; I get an error when I try to copy it. Could you try downloading it? What's your QQ? I'd like to add you so we can study this together. Thanks.

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024

(quoting thorikawa's unzip error report above)

I believe the issue is to do with downloading from a browser and it not being able to handle the huge file. I found that following this guide I was able to download the zip using curl / wget and then extract it without issues
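(A rough Python sketch of the same "download outside the browser" idea, using the classic Google Drive confirm-token workaround; the file id is a placeholder, and gdown or the curl/wget guide above achieves the same thing.)

import requests

FILE_ID = "YOUR_FILE_ID"  # placeholder
URL = "https://docs.google.com/uc?export=download"

session = requests.Session()
resp = session.get(URL, params={"id": FILE_ID}, stream=True)
# Large files trigger a "can't scan for viruses" page; pick up the confirm token if present.
token = next((v for k, v in resp.cookies.items() if k.startswith("download_warning")), None)
if token:
    resp = session.get(URL, params={"id": FILE_ID, "confirm": token}, stream=True)

with open("100_epoch_selfie2anime_checkpoint.zip", "wb") as f:
    for chunk in resp.iter_content(chunk_size=1 << 20):
        if chunk:
            f.write(chunk)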

from ugatit.

xizeyoupan avatar xizeyoupan commented on September 23, 2024
(quoting the Google Drive copy exchange above)

Take a look at this.
The main problem is that it's over 4 GB, so uploading to Baidu Netdisk is a pain; it would have to be split into volumes.

from ugatit.

wangshisheng avatar wangshisheng commented on September 23, 2024
(quoting the exchange above about mirroring the checkpoints on Baidu)

Could you also share a copy of the image data? Thanks a lot!~~

from ugatit.

neuralphene avatar neuralphene commented on September 23, 2024

@t04glovern I have retried using gdown.pl mentioned in stackoverflow, but still got the same error when I extracted them. Still seems zip files are corrupted.

Can you share md5sum of your 100_epoch_selfie2anime_checkpoint.zip?
Mine is as below.

$ md5sum 100_epoch_selfie2anime_checkpoint.zip 
1acedc844eca4605bad41ef049fba401  100_epoch_selfie2anime_checkpoint.zip

I have the same md5 and the same corrupt error. Similarly, I have attempted to download via multiple methods (gdown.pl, wget, the python method mentioned, etc). All have the same result. Seems to be a corrupt zip.

from ugatit.

WinHGGG avatar WinHGGG commented on September 23, 2024

@thorikawa The zip files are not corrupted.
They are just TOO LARGE.
I tried to unpack one on CentOS and it blew up (stack overflow),
but on Windows it can be unpacked.
(Although some strange things happen:
TIM图片20190815215037

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024

@neuralphene I also unpacked it on my Macbook (I have yet to retry it on my PopOS system). Sounds like someone above said it worked for them on Windows too... so maybe it's the way unzip handles the file?

from ugatit.

wings0820 avatar wings0820 commented on September 23, 2024

(quoting WinHGGG's results above)

Is there a mirror hosted inside China? Google Drive downloads are slow and keep dropping. @WinHGGG

from ugatit.

neuralphene avatar neuralphene commented on September 23, 2024

I tried jar xf <zipfile> and got java.io.IOException: Push back buffer is full

I tried 7z x <zipfile> and got Can not open the file as archive

from ugatit.

WinHGGG avatar WinHGGG commented on September 23, 2024

@neuralphene Yes, it also happened to me.
Switching to Windows may help.

from ugatit.

neuralphene avatar neuralphene commented on September 23, 2024

I copied the file to Windows. When I try Extract All, I get "Windows cannot open the folder."

I installed WinZip, tried "Unzip to here", and received "Error: central directory not found. One or more files could not be unzipped."

from ugatit.

suedroplet avatar suedroplet commented on September 23, 2024

@t04glovern unzip can't handle .zip files larger than 2 GB.
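(If the system unzip is the problem, Python's zipfile module supports ZIP64 and may get further; a minimal sketch:)

import zipfile

with zipfile.ZipFile("100_epoch_selfie2anime_checkpoint.zip") as zf:
    zf.extractall("checkpoint_extracted")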

from ugatit.

QQ2737499951 avatar QQ2737499951 commented on September 23, 2024

Dataset and model download

==========================
Link: https://pan.baidu.com/s/1dP1mXuU-rA9dPvFe8YS8jQ
Access code: k6rc

The checkpoint files have been added.

from ugatit.

t04glovern avatar t04glovern commented on September 23, 2024

Working great; it turned all our prime ministers here in Australia into kawaii cuties.

https://twitter.com/nathangloverAUS/status/1162038115545931776?s=19

IMG_20190816_003606

from ugatit.

opentld avatar opentld commented on September 23, 2024

Working great, turned all our prime ministers here Australia into kawaii cuties
https://twitter.com/nathangloverAUS/status/1162038115545931776?s=19

Great~! Is it your own trained model or taki0112's model? @t04glovern

from ugatit.

QQ2737499951 avatar QQ2737499951 commented on September 23, 2024

Please try it on new pictures: download new pictures from the web to test, and see what the actual results look like?

from ugatit.

QQ2737499951 avatar QQ2737499951 commented on September 23, 2024

I got this result by using taki's model:

image
===============
Are you on Win10 or Ubuntu? What is your graphics card configuration? Let's all share our computer setups.

My computer is Win10 + 1080 Ti (11 GB).

from ugatit.

thorikawa avatar thorikawa commented on September 23, 2024

I switched to macOS and successfully extracted it (and image-to-image translation is working now!). Thank you!

from ugatit.

wangshisheng avatar wangshisheng commented on September 23, 2024

(quoting leemengtw's note above that roughly 1 in 5 selfies gives a relatively good result)

That's true. It seems the algorithm and the training data still have room to improve, but it's already seriously impressive~~

from ugatit.
