Comments (3)
Can you help build such a notebook?
from fresco.
Sure, once I can figure out how to get run_fresco.py to work -- see the separate issue.
A notebook would be great. I'm trying to run it on Google Colab now and getting multiple errors.
Edit: OK, got it working out of the box on Colab; I just had to change a few things.
The style transfer is super crisp!
Set the batch size to 4 in the config.
Requirements
!pip install diffusers
!pip install gradio
!pip install numba
!pip install imageio-ffmpeg
!pip install transformers
!pip install torchvision
!pip install opencv-python
!pip install opencv-contrib-python
!pip install einops
!pip install matplotlib
!pip install timm
!pip install av
!pip install basicsr
!pip install accelerate
!python /content/FRESCO/install.py
To use webui.py, change line 358 so the `source` argument is commented out (newer Gradio versions removed it from `gr.Video`):
#source='upload',
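Instead of hand-editing that line, a version-tolerant alternative is to filter out keyword arguments the installed Gradio no longer accepts. This is only a sketch -- the names below are illustrative, not FRESCO's actual code, and a stand-in class replaces `gr.Video` so the snippet is self-contained:

```python
# Sketch: pass `source` only when the constructor still declares it.
# Gradio 4 removed `source` from gr.Video, which causes the TypeError
# reported in the related issues.
import inspect

def accepted_kwargs(init, **kwargs):
    """Keep only the kwargs that `init` actually declares."""
    params = set(inspect.signature(init).parameters)
    return {k: v for k, v in kwargs.items() if k in params}

class FakeVideo:  # stand-in for gr.Video so the sketch runs anywhere
    def __init__(self, label=None, format=None):
        self.kwargs = {"label": label, "format": format}

kwargs = accepted_kwargs(FakeVideo.__init__, label="Input", source="upload", format="mp4")
print(sorted(kwargs))  # `source` is silently dropped
```

With the real `gr.Video`, the same call would keep `source` on Gradio 3 and drop it on Gradio 4.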
Change the imports at the top of src/freelunchutils to this:
from typing import Any, Dict, Optional, Tuple
import torch
import torch.fft as fft
from diffusers.utils import is_torch_version
#from diffusers.models.unet_2d_condition import logger as logger2d
from diffusers.models.unets.unet_2d_condition import UNet2DConditionOutput, UNet2DConditionModel
from diffusers.models.unets.unet_3d_condition import UNet3DConditionOutput, UNet3DConditionModel
#from diffusers.models import unet_3d_condition
#from unet_3d_condition import logger as logger3d
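The edit above pins the file to the new diffusers layout. A more forgiving variant (a sketch, not the repo's code) tries both import paths, since diffusers moved the UNet modules into a `unets` subpackage around v0.26 -- that move is exactly why the old imports fail. The fallback to `None` is only there so the snippet runs even where diffusers isn't installed:

```python
# Version-tolerant import sketch for UNet2DConditionModel.
def import_unet2d_condition():
    try:  # newer diffusers layout (unets subpackage)
        from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
    except ImportError:
        try:  # older diffusers layout (flat module)
            from diffusers.models.unet_2d_condition import UNet2DConditionModel
        except ImportError:
            return None  # diffusers not installed at all
    return UNet2DConditionModel

cls = import_unet2d_condition()
print(cls.__name__ if cls is not None else "diffusers not installed")
```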
Change /usr/local/lib/python3.10/dist-packages/diffusers/models/__init__.py to this:
#@title replace above with
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ..utils import (
    DIFFUSERS_SLOW_IMPORT,
    _LazyModule,
    is_flax_available,
    is_torch_available,
)

_import_structure = {}

if is_torch_available():
    _import_structure["adapter"] = ["MultiAdapter", "T2IAdapter"]
    _import_structure["autoencoders.autoencoder_asym_kl"] = ["AsymmetricAutoencoderKL"]
    _import_structure["autoencoders.autoencoder_kl"] = ["AutoencoderKL"]
    _import_structure["autoencoders.autoencoder_kl_temporal_decoder"] = ["AutoencoderKLTemporalDecoder"]
    _import_structure["autoencoders.autoencoder_tiny"] = ["AutoencoderTiny"]
    _import_structure["autoencoders.consistency_decoder_vae"] = ["ConsistencyDecoderVAE"]
    _import_structure["controlnet"] = ["ControlNetModel"]
    _import_structure["dual_transformer_2d"] = ["DualTransformer2DModel"]
    _import_structure["embeddings"] = ["ImageProjection"]
    _import_structure["modeling_utils"] = ["ModelMixin"]
    _import_structure["transformers.prior_transformer"] = ["PriorTransformer"]
    _import_structure["transformers.t5_film_transformer"] = ["T5FilmDecoder"]
    _import_structure["transformers.transformer_2d"] = ["Transformer2DModel"]
    _import_structure["transformers.transformer_temporal"] = ["TransformerTemporalModel"]
    _import_structure["unets.unet_1d"] = ["UNet1DModel"]
    _import_structure["unets.unet_2d"] = ["UNet2DModel"]
    _import_structure["unets.unet_2d_condition"] = ["UNet2DConditionModel"]
    _import_structure["unets.unet_3d_condition"] = ["UNet3DConditionModel"]
    _import_structure["unets.unet_i2vgen_xl"] = ["I2VGenXLUNet"]
    _import_structure["unets.unet_kandinsky3"] = ["Kandinsky3UNet"]
    _import_structure["unets.unet_motion_model"] = ["MotionAdapter", "UNetMotionModel"]
    _import_structure["unets.unet_spatio_temporal_condition"] = ["UNetSpatioTemporalConditionModel"]
    _import_structure["unets.unet_stable_cascade"] = ["StableCascadeUNet"]
    _import_structure["unets.uvit_2d"] = ["UVit2DModel"]
    _import_structure["vq_model"] = ["VQModel"]

if is_flax_available():
    _import_structure["controlnet_flax"] = ["FlaxControlNetModel"]
    _import_structure["unets.unet_2d_condition_flax"] = ["FlaxUNet2DConditionModel"]
    _import_structure["vae_flax"] = ["FlaxAutoencoderKL"]

if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
    if is_torch_available():
        from .adapter import MultiAdapter, T2IAdapter
        from .autoencoders import (
            AsymmetricAutoencoderKL,
            AutoencoderKL,
            AutoencoderKLTemporalDecoder,
            AutoencoderTiny,
            ConsistencyDecoderVAE,
        )
        from .controlnet import ControlNetModel
        from .embeddings import ImageProjection
        from .modeling_utils import ModelMixin
        from .transformers import (
            DualTransformer2DModel,
            PriorTransformer,
            T5FilmDecoder,
            Transformer2DModel,
            TransformerTemporalModel,
        )
        from .unets import (
            I2VGenXLUNet,
            Kandinsky3UNet,
            MotionAdapter,
            StableCascadeUNet,
            UNet1DModel,
            UNet2DConditionModel,
            UNet2DModel,
            UNet3DConditionModel,
            UNetMotionModel,
            UNetSpatioTemporalConditionModel,
            UVit2DModel,
        )
        from .vq_model import VQModel

    if is_flax_available():
        from .controlnet_flax import FlaxControlNetModel
        from .unets import FlaxUNet2DConditionModel
        from .vae_flax import FlaxAutoencoderKL

else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
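Note that the /usr/local/lib/python3.10/dist-packages path above is Colab-specific. To find the right file on any machine, you can ask Python where the package lives. A small sketch, demonstrated on a stdlib package so it runs anywhere; on Colab you would pass `"diffusers.models"` instead:

```python
# Locate a package's __init__.py without hard-coding site-packages paths.
import importlib.util

def package_init_path(name):
    spec = importlib.util.find_spec(name)
    return spec.origin  # filesystem path of the package's __init__.py

path = package_init_path("email.mime")  # swap in "diffusers.models" on Colab
print(path)
```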
Related Issues (20)
- Will a 24G GPU run out of memory when the video is larger than 1M, or is it just me? HOT 4
- TypeError: Video.__init__() got an unexpected keyword argument 'source' HOT 3
- Enable support to use lightning model
- ImportError: cannot import name 'logger' from 'diffusers.models.unet_2d_condition' (/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py) HOT 1
- ebsynth generation fails HOT 7
- run on Colab with A100 GPU and 40G GPU RAM still got Error HOT 6
- run webUI.py with a 3 seconds video of 1280x720, got Error HOT 1
- [Run Key Frames] sucess, but [Run Propagation (Ebsynth)] failed: FileNotFoundError: [Errno 2] No such file or directory: 'output/480x480/blend.mp4' HOT 1
- Got RuntimeError:: element 0 of tensors does not require grad and does not have a grad_fn HOT 1
- python video_blend.py Cannot run HOT 3
- Does it support LCM models? HOT 6
- Diffusers HOT 2
- Video blend too slow: 18 vCPU / RTX 3090(24GB) HOT 1
- can not find target file while executing process_seq(video_sequence, i, blend_histogram, blend_gradient) in video_blend.py HOT 21
- blender not found
- Meaning of unet_chunk_size in FRESCOAttnProcessor2_0 codes? HOT 2
- Standalone video_blend usage instructions unclear HOT 1
- video_blend error HOT 12
- bad result of white cat