lonicamewinsky / frame2frame
Automatic1111 Stable Diffusion WebUI extension; generates img2img against frames in an animation
License: MIT License
I'm working on gif2gif right now, and since you stated previously that it will be dropped soon in favor of this plugin, I'll drop this here.
I have a scene where I'm trying to add an appendage to the character, but I'd like the rest of the character's movement to stay the same as the source video. The most efficient way to do this is to generate a PNG output folder of the OpenPose preprocessing of the character and then run it through img2img manually, one frame at a time, because there is no way to run a GIF of an OpenPose preprocess through gif2gif. Is it at all possible to put a custom GIF of the OpenPose output into the workflow? Might be asking too much, but better to ask and hear no than to never ask at all!
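In the meantime, that per-frame workflow can be scripted outside the webui. A minimal sketch (assuming Pillow is available; the function name is mine, not part of frame2frame) that splits an animation into numbered PNGs ready for img2img batch processing:

```python
import os

from PIL import Image, ImageSequence


def dump_frames(anim_path: str, out_dir: str) -> list[str]:
    """Split an animated GIF (or other multi-frame image) into
    numbered PNGs, e.g. for feeding img2img's batch tab one frame
    at a time alongside a matching OpenPose folder."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    with Image.open(anim_path) as anim:
        for i, frame in enumerate(ImageSequence.Iterator(anim)):
            path = os.path.join(out_dir, f"frame_{i:05d}.png")
            frame.convert("RGB").save(path)
            paths.append(path)
    return paths
```

Running the same script on both the OpenPose render and the source clip yields two frame folders with matching indices.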
Hello, I know you're just starting on this (I found it in a GitHub search of recently updated Automatic1111 extensions), but because of that I figured maybe you're looking for feedback. I installed using the URL, put a 15-sec MP4 in the box (it loaded fine), added a prompt, but got the following error when clicking generate. It could very well be my setup or something wrong w/ my personal installation/extensions/etc.; I have had occasional crashes the past few weeks. Good luck, and thanks for continuing to raise the bar w/ SD via your work.
===
Restarting UI...
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Loading weights [ca13540e36] from e:\Stable Diffusion Checkpoints\2023-03-04 - Topnotch 66 - (Flip half) - 40img - (Flip half) - 3000 steps (of 5000 max).ckpt
Loading VAE weights specified in settings: C:\stable-diffusion-webui\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying xformers cross attention optimization.
Weights loaded in 1.2s (load weights from disk: 0.7s, apply weights to model: 0.1s, load VAE: 0.1s, move model to device: 0.2s).
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Will process 1 animation(s) with 3430 total generations.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 1 images in a total of 1 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:09<00:00, 2.85it/s]
Error completing request | 28/96040 [00:09<9:17:00, 2.87it/s]
Arguments: ('task(mzs0ni3fgdfr7pk)', 0, 'topnotch artstyle ', '', [], <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x222C1B27A30>, None, {'image': <PIL.Image.Image image mode=RGBA size=1920x1080 at 0x222C1B25FF0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=1920x1080 at 0x222C1B278E0>}, None, None, None, None, 36, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 1080, 1920, 0, 0, 32, 0, '', '', '', [], 11, False, '', 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, '
CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, <tempfile._TemporaryFileWrapper object at 0x00000222C1B264A0>, True, True, True, '29', '3430', True, True, True, 0, 0.1, 1, 'None', False, 0, 2, 512, 512, False, None, None, None, 50, False, 4.0, '', 10.0, 'Linear', 3, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
ERROR:asyncio:Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 162, in _call_connection_lost
self._sock.shutdown(socket.SHUT_RDWR)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
Will process 1 animation(s) with 459 total generations.
INFO:sd_dynamic_prompts.dynamic_prompting:Prompt matrix will create 1 images in a total of 1 batches.
100%|██████████████████████████████████████████████████████████████████████████████████| 28/28 [00:02<00:00, 13.42it/s]
Error completing request | 55/96040 [00:55<3:42:01, 7.21it/s]
Arguments: ('task(xjtsx4lighawizg)', 0, 'topnotch artstyle ', '', [], <PIL.Image.Image image mode=RGBA size=960x720 at 0x222C178CFA0>, None, {'image': <PIL.Image.Image image mode=RGBA size=960x720 at 0x222C178F7C0>, 'mask': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=960x720 at 0x222C178F910>}, None, None, None, None, 36, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 720, 960, 0, 0, 32, 0, '', '', '', [], 11, False, '', 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, 7, 100, 'Constant', 0, 'Constant', 0, 4, True, True, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, False, 'none', 'None', 1, None, False, 'Scale to Fit (Inner Fit)', False, False, 64, 64, 64, 0, 1, False, '
CFG Scale
should be 2 or lower.Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', 'Will upscale the image by the selected scale factor; use width and height sliders to set tile size
', 64, 0, 2, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Blur First V1', 0.25, 10, 10, 10, 10, 1, False, '', '', 0.5, 1, False, <tempfile._TemporaryFileWrapper object at 0x00000222C178F070>, True, True, True, '29', '459', True, True, True, 0, 0.1, 1, 'None', False, 0, 2, 512, 512, False, None, None, None, 50, False, 4.0, '', 10.0, 'Linear', 3, False, True, 30.0, True, False, False, 0, 0.0, 'Lanczos', 1, True, 0, 0, 'Will upscale the image depending on the selected target size type
', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048, 2) {}
My IMG2IMG has been breaking w/ the below error sporadically for the past few days. I hadn't narrowed it down to a specific extension, as I just kept disabling a bunch and eventually it would resolve. However, I did narrow it down to F2F today, as I'm getting the error below if it's enabled, but if I disable it and restart I don't get the error.
I -think- what's going on is that if you close F2F w/o clicking the [x] on the video (and removing it), that can cause problems. What's bad is that this error occurs even if F2F isn't selected in the script box, and after a restart of the whole webui.
I've disabled F2F for now as I don't have a use for it and I know it's still a work in progress, but I definitely wanted to pass this along. If it's something related to my specific setup, my apologies:
Closing server running on port: 7860
Restarting UI...
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 377.6s (import gradio: 0.9s, import ldm: 0.3s, other imports: 0.8s, list extensions: 0.4s, list SD models: 0.7s, load scripts: 0.7s, load SD checkpoint: 3.1s, scripts before_ui_callback: 361.6s, create ui: 8.6s, gradio launch: 0.3s, scripts app_started_callback: 0.2s).
Traceback (most recent call last):
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1013, in process_api
inputs = self.preprocess_data(fn_index, inputs, state)
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 911, in preprocess_data
processed_input.append(block.preprocess(inputs[i]))
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 540, in preprocess
return self._round_to_precision(x, self.precision)
File "C:\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 502, in _round_to_precision
return float(num)
ValueError: could not convert string to float: ''
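That ValueError is gradio's Number component calling `float()` on an empty string — i.e. some numeric field (presumably one of frame2frame's) was left holding `''` after the video was cleared. A hedged sketch of what a tolerant version of that rounding step could look like (helper name and behavior are mine, not gradio's actual code):

```python
def round_to_precision_tolerant(num, precision):
    """Like gradio's Number rounding step, but treat a cleared
    field ('' or None) as 'no value' instead of raising ValueError."""
    if num is None or num == "":
        return None
    num = float(num)
    if precision is None:
        return num
    if precision == 0:
        # precision 0 means "integer field"
        return int(round(num))
    return round(num, precision)
```

A simpler upstream fix is giving every Number component a numeric default so an empty string never reaches the coercion step.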
Not an issue per se, however each time Automatic1111 is launched, it seems the opencv-python requirement attempts to reinstall. I'd like to explore any suggestions on how to get this requirement installed permanently.
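A1111 extensions declare dependencies in an `install.py` that calls `launch.is_installed()` before `launch.run_pip()`. A common cause of a reinstall on every launch is checking the pip package name (`opencv-python`) instead of the import name (`cv2`); the check the webui performs boils down to a module lookup, roughly sketched here:

```python
import importlib.util


def needs_install(import_name: str) -> bool:
    """Roughly what A1111's launch.is_installed() negates: a module
    lookup by *import* name. Passing the pip name instead -- e.g.
    'opencv-python' rather than 'cv2' -- always reports 'missing'
    and re-triggers pip install on every launch."""
    return importlib.util.find_spec(import_name) is None
```

So if the extension's `install.py` guards the install with the pip name, switching the guard to the import name should stop the repeated installs.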
Hey again,
One minor UI issue I noticed playing around w/ "frame2frame" earlier. If you use the slider a bit in the same session (or slide, jump, slide, jump), it has a tendency to lose its place and come up with an inaccurate FPS value. I used OBS to cap a quick example (vid below). Minor, but wanted to mention. Thanks!
(I'm on the latest commit of both the extension & A1111. The MP4 in the sample was rendered out of Vegas at standard NTSC 29.97 fps.)
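Fractional NTSC rates make this easy to get wrong: a 29.97-fps clip whose rate is recomputed from frame count and duration can drift slightly and then round to 29 or 30 in the UI. One hedged way to stabilize the displayed value (the constants and helper are mine, not the extension's code) is to snap to the nearest broadcast-standard rate:

```python
STANDARD_RATES = (23.976, 24.0, 25.0, 29.97, 30.0, 50.0, 59.94, 60.0)


def snap_fps(measured: float, tolerance: float = 0.05) -> float:
    """Snap a measured frame rate to the nearest broadcast-standard
    rate when it's within tolerance; otherwise keep the raw value."""
    nearest = min(STANDARD_RATES, key=lambda r: abs(r - measured))
    return nearest if abs(nearest - measured) <= tolerance else measured
```

This keeps NTSC clips at 29.97 regardless of small measurement jitter, while leaving genuinely unusual rates untouched.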
When trying to use the script on the img2img tab I get this error.
(And I also get the same error with your gif2gif extension right now)
Traceback (most recent call last):
File "D:\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "D:\stable-diffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\stable-diffusion\stable-diffusion-webui\modules\img2img.py", line 169, in img2img
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\stable-diffusion\stable-diffusion-webui\modules\scripts.py", line 399, in run
processed = script.run(p, *script_args)
File "D:\stable-diffusion\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 316, in run
generated_frames.append(generate_frame(frame))
File "D:\stable-diffusion\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 253, in generate_frame
cn_layers = cnet.get_all_units_in_processing(orig_p)
File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 94, in get_all_units_in_processing
return get_all_units(p.scripts, p.script_args)
File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 105, in get_all_units
return get_all_units_from(script_args[cn_script.args_from:cn_script.args_to])
File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 125, in get_all_units_from
units.append(to_processing_unit(script_args[i]))
File "D:\stable-diffusion\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\external_code.py", line 181, in to_processing_unit
assert isinstance(unit, ControlNetUnit), f'bad argument to controlnet extension: {unit}\nexpected Union[dict[str, Any], ControlNetUnit]'
AssertionError: bad argument to controlnet extension: <scripts.external_code.ControlNetUnit object at 0x0000014C1BDBE080>
expected Union[dict[str, Any], ControlNetUnit]
WebUI commit: a9fed7c364061ae6efb37f797b6b522cb3cf7aa2
ControlNet commit: 8682c1e49728926bb7cc7753da8917d1ab095fb1
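The repr in the assertion shows the object *is* a ControlNetUnit, yet `isinstance()` fails — the classic symptom of the same class being imported under two different module paths, which produces two distinct class objects. Since `external_code.to_processing_unit` also accepts plain dicts, one hedged workaround on the caller's side (the helper name is mine) is to hand over dicts instead of class instances:

```python
def unit_as_dict(unit) -> dict:
    """Convert a ControlNetUnit-like object into a plain dict of its
    public attributes, sidestepping the isinstance() check that fails
    when two extensions hold different copies of the same class."""
    return {k: v for k, v in vars(unit).items() if not k.startswith("_")}
```

The cleaner long-term fix is for both extensions to import `external_code` through the same module path so only one ControlNetUnit class exists.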
I have a problem loading the script and can't figure out a solution for that.
Do you have an idea what the problem is and how to solve it?
ERROR:
Error loading script: frame2frame.py
Traceback (most recent call last):
File "C:\Users\Admin\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\Admin\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Admin\stable-diffusion-webui\extensions\frame2frame\scripts\frame2frame.py", line 14, in <module>
from moviepy.editor import VideoFileClip
File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\editor.py", line 36, in <module>
from .video.io.VideoFileClip import VideoFileClip
File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\video\io\VideoFileClip.py", line 3, in <module>
from moviepy.audio.io.AudioFileClip import AudioFileClip
File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\audio\io\AudioFileClip.py", line 3, in <module>
from moviepy.audio.AudioClip import AudioClip
File "C:\Users\Admin\stable-diffusion-webui\venv\lib\site-packages\moviepy\audio\AudioClip.py", line 4, in <module>
import proglog
ModuleNotFoundError: No module named 'proglog'
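moviepy imports proglog at load time, and here it apparently never made it into the webui's venv. Installing it there by hand (paths assume the default Windows layout shown in the traceback) should let the script load:

```shell
cd C:\Users\Admin\stable-diffusion-webui
venv\Scripts\activate
pip install proglog
```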
Hey, couldn't find a way to reach out to you directly so I'm throwing it here.
I have figured out a new way to generate videos in SD that has great temporal stability, you can see an example of the output here:
https://i.imgur.com/H4KtklP.gif
The process is pretty simple, and I'd like to release it as an extension, but I don't do Python. It seems like you have most of the pieces needed across your projects, so I was curious whether you might like to collaborate on adding this to your project or releasing it as a separate extension.
I don't really want to publicly disclose the secret sauce before releasing it, so if you'd like to discuss it you can reach me at [email protected]
Hey!
Just wanted to say thanks again for developing this. It's been 6 months since an update, and it doesn't seem like the sister extension (gif2gif) has had a recent update either. It had continued to work for a long while, but I don't think it currently loads properly on A1111 1.6. I guess I'm just wondering if the developer(s) have been busy or if this project is essentially done for good.
I really enjoyed it when it worked, and it'd be cool to have it as an option to use w/ SDXL along w/ ControlNet and the new IP-Adapter model.
Hope all is well!