
styleganex's People

Contributors

endlesssora, williamyang1991


styleganex's Issues

How did you find the editing_w styles for style transfer?

I tried to apply my own styles, found through StyleCLIP and with shape [18, 512], to the codes variable in the pSp forward function, but they don't seem to work with the hair/age or inversion (after optimization) networks, even though the generator is a standard StyleGAN. It seems that the first_layer_feats from the encoder suppress my StyleCLIP edit. However, I see that the random styles obtained through the mapping network from 512-dimensional random vectors do work in your example. Can I use StyleCLIP, or can I somehow obtain my own styles?
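
For reference, here is a minimal sketch of the kind of edit described above, assuming a pSp-style encoder/decoder split; encoder, decoder, and styleclip_direction are hypothetical placeholders rather than names from the StyleGANEX code base.

    # Minimal sketch (assumed pSp-style API, hypothetical names): add a custom
    # [18, 512] direction to the encoded w+ codes before decoding.
    import torch

    def edit_with_custom_direction(x, encoder, decoder, styleclip_direction, strength=1.0):
        codes = encoder(x)                         # w+ codes, shape [B, 18, 512]
        direction = styleclip_direction.to(codes)  # custom direction, shape [18, 512]
        edited = codes + strength * direction.unsqueeze(0)
        return decoder(edited)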

Traceback error

I installed all the requirements, and when I run python app_gradio.py

I get this error:

Traceback (most recent call last):
  File "app_gradio.py", line 9, in <module>
    from webUI.styleganex_model import Model
  File "E:\Ai__Project\StyleGANEX\webUI\styleganex_model.py", line 9, in <module>
    import dlib
  File "C:\Users\ammar\anaconda3\envs\styleganex\lib\site-packages\dlib\__init__.py", line 19, in <module>
    from _dlib_pybind11 import *
ImportError: DLL load failed while importing _dlib_pybind11: The specified module could not be found.
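
A quick way to narrow this down is to check whether dlib imports at all outside of StyleGANEX; if the bare import fails with the same DLL error, the problem is the dlib build rather than this repository. The snippet below is only a diagnostic sketch, and the conda-forge reinstall mentioned in the comment is a commonly suggested Windows workaround, not a fix confirmed by the authors.

    # Diagnostic sketch: confirm whether dlib imports on its own.
    # If this also fails, the dlib build is the culprit, not StyleGANEX.
    # A commonly suggested Windows workaround (an assumption, not an official fix)
    # is reinstalling dlib from conda-forge: conda install -c conda-forge dlib
    import dlib
    print(dlib.__version__)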

Training code

Thanks for your awesome work! Can you please let us know when the training code will be released?

Face attribute editing

Awesome work! VToonify was amazing, but StyleGANEX is even better.

(screenshot attached)

In particular, I am interested in face attribute editing for video, but I can only test age and hair editing. As shown in the example (open mouth, smile, gender swap, etc.), how can I edit the other face attributes?

Is training StyleGANEX necessary?
It would be great if you could elaborate, step by step, on how to achieve the different face attribute edits.

Thank you! :)

Video editing output size

Hi, fantastic job! I don't understand the output resolution in video editing. It looks like it tracks a single face and zooms in; what would be the best way to return it to its original size in a video editing app? Could I just do a 2x zoom out and a shift in x or y, or something like that?

For my example, the original size is 1920x1080 and the output is 1920x1632.

Problem with launching Gradio on Windows

I installed everything correctly and the other scripts work, but Gradio doesn't. I'm on Windows.
Traceback (most recent call last):
  File "app_gradio.py", line 97, in <module>
    main()
  File "app_gradio.py", line 75, in main
    create_demo_inversion(model.process_inversion, allow_optimization=True)
  File "X:\StyleGANEX\webUI\app_task.py", line 313, in create_demo_inversion
    api_name='inversion')
  File "X:\StyleGANEX\venv\lib\site-packages\gradio\events.py", line 157, in __call__
    trigger_only_on_success=self.trigger_only_on_success,
  File "X:\StyleGANEX\venv\lib\site-packages\gradio\blocks.py", line 225, in set_event_trigger
    check_function_inputs_match(fn, inputs, inputs_as_dict)
  File "X:\StyleGANEX\venv\lib\site-packages\gradio\utils.py", line 749, in check_function_inputs_match
    parameter_types = get_type_hints(fn)
  File "X:\StyleGANEX\venv\lib\site-packages\gradio\utils.py", line 704, in get_type_hints
    return typing.get_type_hints(fn)
  File "X:\StyleGANEX\venv\lib\typing.py", line 1013, in get_type_hints
    value = _eval_type(value, globalns, localns)
  File "X:\StyleGANEX\venv\lib\typing.py", line 263, in _eval_type
    return t._evaluate(globalns, localns)
  File "X:\StyleGANEX\venv\lib\typing.py", line 467, in _evaluate
    eval(self.__forward_code__, globalns, localns),
  File "<string>", line 1, in <module>
NameError: name '__file__' is not defined

If it helps, some files are redownloaded whenever I try to launch Gradio:
100%|█████████████████████████████████████████████████████████████████████████████| 17.5k/17.5k [00:00<00:00, 1.05MB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 4.01M/4.01M [00:02<00:00, 1.58MB/s]
100%|████████████████████████████████████████████████████████████████████████████████| 174k/174k [00:00<00:00, 628kB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 4.11k/4.11k [00:00<00:00, 1.41MB/s]
100%|██████████████████████████████████████████████████████████████████████████████| 42.9k/42.9k [00:00<00:00, 350kB/s]
100%|██████████████████████████████████████████████████████████████████████████████| 74.1k/74.1k [00:00<00:00, 489kB/s]
100%|█████████████████████████████████████████████████████████████████████████████| 1.04M/1.04M [00:00<00:00, 1.26MB/s]
100%|███████████████████████████████████████████████████████████████████████████████| 901k/901k [00:00<00:00, 1.26MB/s]

These are the redownloads

Inversion

Hi, I'm interested in your work. Here are my inversion results; the inversions are a bit fuzzy (blocky pixels). Is this normal?
(attached: input images 00000 and 00001 with their inversion results)

Download failed - no file

Hi. I can't seem to download a video after it has been made in the web UI. I get "Download failed - no file" whether I try to save it as an HTML or an MP4 file.

Why does the 7th layer of StyleGAN2 have a resolution of 32x32?

In the paper, the author wrote as follows:
"In Fig. 2(e), the first-layer feature fails to provide enough spatial information for a valid rotation. In comparison, the 7-th layer has a higher resolution (32 × 32), making it better suited for capturing spatial information."
As far as I know, resolution 32x32 corresponds to the 4th layer, and resolution 256x256 corresponds to the 7th layer.
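
The discrepancy likely comes from counting resolution blocks versus individual style layers. The sketch below assumes the paper counts the 18 per-layer w+ entries of a 1024x1024 StyleGAN2, two per resolution starting at 4x4; this is an interpretation, not a statement from the authors.

    # Sketch: map w+ layer indices to resolutions for a 1024x1024 StyleGAN2,
    # assuming two style layers per resolution starting at 4x4 (18 layers total).
    # Under this counting, layers 7-8 sit at 32x32; counting one block per
    # resolution instead (4, 8, 16, ...) would put the "7th" entry at 256x256.
    resolutions = [4 * 2 ** (i // 2) for i in range(18)]
    for layer, res in enumerate(resolutions, start=1):
        print(f"layer {layer:2d}: {res}x{res}")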

About scale_factor

Dear Williamyang1991:

Thanks for your nice work.

I want to generate a video with a younger face; is the following command correct?

python video_editing.py --ckpt pretrained_models\styleganex_edit_age.pt --data_path data\2.mp4 --scale_factor 2

What is the real meaning of scale_factor?

Thanks.
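
For what it's worth, in many latent-editing pipelines a scale factor simply multiplies the editing direction before it is added to the latent code, so its magnitude controls the editing strength and its sign the editing direction; whether that is exactly what video_editing.py does, and which sign gives "younger", would need to be confirmed against the script. A generic illustration, not code from this repository:

    # Generic illustration (not taken from video_editing.py): a scale factor
    # typically scales the editing direction before it is added to the latent.
    import torch

    def apply_edit(w_plus: torch.Tensor, editing_w: torch.Tensor, scale_factor: float) -> torch.Tensor:
        # w_plus: [B, 18, 512] latent codes; editing_w: [18, 512] direction
        return w_plus + scale_factor * editing_w.unsqueeze(0)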

Video face editing

Did you train video face editing (e.g. black hair) on aligned data?
According to the paper, it uses a synthetic dataset from StyleGAN2, or did I miss something?
("we can simply generate x and y from random latent code w+ with StyleGAN G_0")
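
For context, the quoted procedure amounts to sampling random latents and decoding both the original and the edited code with a fixed generator. The sketch below only illustrates that idea; generator, editing_w, and the mapping/synthesis calls are assumed placeholders, not the StyleGANEX training code.

    # Illustration: build a synthetic paired sample (x, y) from a random latent
    # with a fixed StyleGAN generator G_0. `generator` and `editing_w` are
    # hypothetical placeholders; the mapping/synthesis API is an assumption.
    import torch

    @torch.no_grad()
    def make_pair(generator, editing_w, n_layers=18, device="cuda"):
        z = torch.randn(1, 512, device=device)
        w = generator.mapping(z)                        # [1, 512] w code (assumed API)
        w_plus = w.unsqueeze(1).repeat(1, n_layers, 1)  # broadcast to w+
        x = generator.synthesis(w_plus)                 # source image
        y = generator.synthesis(w_plus + editing_w)     # edited target image
        return x, y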

Unexpected end of JSON input

I'm getting an "Unexpected end of JSON input" error in Google Colab when the number of frames to toonify is set to 1000. Also, I can't toonify more than 3 seconds of a video.

Adjust Age

First of all, thank you for the wonderful project.

But I am stuck when running age editing: I can't find a parameter to change the age.
Can you explain which parameter will make the subject in the image older?
Thank you.
(screenshot attached)

SyntaxError: 'return' outside function

When running:

python video_editing.py --ckpt STYLEGANEX_MODEL_PATH --data_path FACE_INPUT_PATH

I get:

  File "/content/StyleGANEX/video_editing.py", line 81
    return
    ^
SyntaxError: 'return' outside function

Some confusion

What is the function of editing_w? Is it trained from data of different ages?

Or is styleganex_edit_age.pt trained from data of different ages?

How does the model learn the age change, and what is the principle behind it? If I want to train a model on my own data of different ages, what should I do?

Image edit

Great job! I have some questions about the code. If I edit images, do I need to do the following? Will removing it have an impact on the results?
(screenshot of the code in question attached)

Edit Vector

Thanks for the awesome work!
How can I obtain an attribute edit vector (e.g. smile, glasses, ...)?
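
One common way to obtain such directions (the InterFaceGAN approach, not something specific to StyleGANEX) is to fit a linear classifier on latent codes labeled for the attribute and use the normal of its decision boundary as the edit vector. A sketch, assuming you already have latents and binary labels:

    # InterFaceGAN-style edit direction: fit a linear SVM on latent codes
    # labeled for the attribute (e.g. smile / no smile) and use the unit
    # normal of the decision boundary. Assumes `latents` ([N, 512] array)
    # and `labels` ([N] of 0/1) are already available.
    import numpy as np
    from sklearn.svm import LinearSVC

    def attribute_direction(latents: np.ndarray, labels: np.ndarray) -> np.ndarray:
        clf = LinearSVC(C=1.0, max_iter=10000)
        clf.fit(latents, labels)
        direction = clf.coef_.reshape(-1)
        return direction / np.linalg.norm(direction)

    # Hypothetical usage: w_edited = w + strength * attribute_direction(latents, labels)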

Video output size

I generated an output video using video_editing.py, but the output size is different from the input resolution (the bottom part is cut off).
The input is a portrait video, and the head is not located at the center. Something like this:
(example portrait image attached: a smiling woman with fair hair, framed off-center)

Is there any way to get the same resolution as the original input? It would be great if you could point out the lines that need to be changed...

As always, thank you.
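
As a post-processing workaround (not a feature of video_editing.py), each output frame could be resized back to the original clip's resolution; the sketch below uses OpenCV and ignores any cropping the pipeline performs, so content may be stretched if the aspect ratios differ.

    # Workaround sketch: resize edited frames back to the original resolution.
    import cv2

    def match_resolution(edited_path: str, original_path: str, out_path: str) -> None:
        ref = cv2.VideoCapture(original_path)
        w = int(ref.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(ref.get(cv2.CAP_PROP_FRAME_HEIGHT))
        fps = ref.get(cv2.CAP_PROP_FPS)
        ref.release()

        src = cv2.VideoCapture(edited_path)
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        while True:
            ok, frame = src.read()
            if not ok:
                break
            writer.write(cv2.resize(frame, (w, h)))
        src.release()
        writer.release()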
