

disco-diffusion's Issues

Docker Install Error

Following the Docker install instructions, I get the error below. Any idea how to fix this? I already downloaded the file elsewhere in another setup... should I just put that file where it belongs? Where does it belong? Hmm. Perhaps it would be better to update the Dockerfile with a valid download URL:

--2022-05-11 17:40:28-- https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt
Resolving v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)... 52.92.178.218
Connecting to v-diffusion.s3.us-west-2.amazonaws.com (v-diffusion.s3.us-west-2.amazonaws.com)|52.92.178.218|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2022-05-11 17:40:29 ERROR 404: Not Found.

The command '/bin/sh -c wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt' returned a non-zero code: 8
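Until the URL in the Dockerfile is fixed, one workaround is to bake an already-downloaded checkpoint into the image instead of fetching it. This is a sketch only: the destination path comes from the failing RUN line above, and it assumes you place the .pt file next to the Dockerfile before building:

```dockerfile
# Replace the failing wget RUN line in docker/prep/Dockerfile with a COPY
# of the checkpoint you already downloaded (put it beside the Dockerfile).
COPY 512x512_diffusion_uncond_finetune_008100.pt /scratch/models/
```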

System: Host: BeyondPC Kernel: 5.4.0-109-generic x86_64 bits: 64 Desktop: Cinnamon 5.2.7 Distro: Linux Mint 20.3 Una
Machine: Type: Desktop Mobo: ASUSTeK model: ROG CROSSHAIR VI HERO (WI-FI AC) v: Rev 1.xx serial: <superuser/root required>
UEFI: American Megatrends v: 7901 date: 07/31/2020
CPU: Topology: 6-Core model: AMD Ryzen 5 2600 bits: 64 type: MT MCP L2 cache: 3072 KiB
Speed: 1270 MHz min/max: 1550/3400 MHz Core speeds (MHz): 1: 1270 2: 1270 3: 1352 4: 2465 5: 2789 6: 1423 7: 1301
8: 1822 9: 1269 10: 1268 11: 2466 12: 1471
Graphics: Device-1: NVIDIA GP104 [GeForce GTX 1070 Ti] driver: nvidia v: 510.60.02
Display: x11 server: X.Org 1.20.13 driver: nvidia unloaded: fbdev,modesetting,nouveau,vesa tty: N/A
OpenGL: renderer: NVIDIA GeForce GTX 1070 Ti/PCIe/SSE2 v: 4.6.0 NVIDIA 510.60.02
Audio: Device-1: NVIDIA GP104 High Definition Audio driver: snd_hda_intel
Device-2: Advanced Micro Devices [AMD] Family 17h HD Audio driver: snd_hda_intel
Sound Server: ALSA v: k5.4.0-109-generic
Network: Device-1: Intel I211 Gigabit Network driver: igb
IF: enp5s0 state: up speed: 1000 Mbps duplex: full mac: 0c:9d:92:7e:93:a6
Device-2: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter driver: ath10k_pci
IF: wlp6s0 state: down mac: e8:d1:1b:99:c7:65
IF-ID-1: docker0 state: down mac: 02:42:13:ea:55:6a
Drives: Local Storage: total: 9.32 TiB used: 2.81 TiB (30.2%)
ID-1: /dev/nvme0n1 vendor: Kingston model: RBUSNS8154P3512GJ1 size: 476.94 GiB
ID-2: /dev/sda vendor: Seagate model: ST4000DM004-2CV104 size: 3.64 TiB
ID-3: /dev/sdb vendor: Kingston model: SV300S37A480G size: 447.13 GiB
ID-4: /dev/sdc vendor: Silicon Power model: SPCC Solid State Disk size: 238.47 GiB
ID-5: /dev/sdd vendor: Seagate model: ST5000DM000-1FK178 size: 4.55 TiB
Partition: ID-1: / size: 467.96 GiB used: 110.64 GiB (23.6%) fs: ext4 dev: /dev/nvme0n1p2
Sensors: System Temperatures: cpu: 37.9 C mobo: N/A gpu: nvidia temp: 52 C
Fan Speeds (RPM): N/A gpu: nvidia fan: 0%
Info: Processes: 389 Uptime: 1d 15h 44m Memory: 31.35 GiB used: 10.72 GiB (34.2%) Shell: bash inxi: 3.0.38

ModuleNotFoundError: No module named 'py3d_tools'

Hi, there is an issue in "1.5 Define necessary functions":

ModuleNotFoundError Traceback (most recent call last)
in ()
4
5 import pytorch3d.transforms as p3dT
----> 6 import disco_xform_utils as dxf
7
8 def interp(t):

/content/disco_xform_utils.py in ()
1 import torch, torchvision
----> 2 import py3d_tools as p3d
3 import midas_utils
4 from PIL import Image
5 import numpy as np

ModuleNotFoundError: No module named 'py3d_tools'
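A common workaround (a sketch: it assumes py3d_tools.py lives in a pytorch3d-lite clone placed next to the notebook; the directory name is an assumption about your setup) is to put that directory on sys.path before disco_xform_utils is imported:

```python
import os
import sys

# Hypothetical location of the clone that provides py3d_tools.py;
# adjust to wherever you actually placed it.
P3D_LITE_DIR = os.path.abspath("pytorch3d-lite")

if P3D_LITE_DIR not in sys.path:
    sys.path.append(P3D_LITE_DIR)

# With the directory on sys.path, `import py3d_tools as p3d` can resolve,
# provided the clone actually exists at that location.
```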

RuntimeError: CUDA error

Hi there,

I am running a Colab Pro+ subscription,
but after the run I received the following error.
What do I need to do to fix this?
Thank you

Starting Run: TimeToDisco(5) at frame 0
Prepping model...

RuntimeError Traceback (most recent call last)
in ()
161 model, diffusion = create_model_and_diffusion(**model_config)
162 model.load_state_dict(torch.load(f'{model_path}/{diffusion_model}.pt', map_location='cpu'))
--> 163 model.requires_grad_(False).eval().to(device)
164 for name, param in model.named_parameters():
165 if 'qkv' in name or 'norm' in name or 'proj' in name:

4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in convert(t)
903 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None,
904 non_blocking, memory_format=convert_to_format)
--> 905 return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
906
907 return self._apply(convert)

RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
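For what it's worth, the last line of that message is the debugging hint: setting CUDA_LAUNCH_BLOCKING makes kernel launches synchronous, so the traceback points at the call that actually faulted. A minimal sketch (it must run before any CUDA work starts):

```python
import os

# Must be set before the first CUDA call in the process. It makes kernel
# launches synchronous (much slower, debugging only) so errors surface
# at the offending call instead of a later, unrelated one.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```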

Controlling object uniformity

While the framework is great at generating complex landscapes, deserts, trees, buildings, etc., I'm looking to control how it generates a single object like a car and limit the craziness of the output.

I like the background, it's even realistic, but the car is way unrealistic. How can I fix that?


Impossible to render a 4K image?!

I'm trying to render a ~4K image ([3840, 2176]) without any luck!

I tried powerful devices like an A100 with 80 GB of VRAM and an RTX A6000 with 48 GB of VRAM; they all fail. I tried reducing cutn_batches to 1, and other settings, and they still fail. How come?

The error is either

CUDA error: an illegal memory access was encountered

OR

CUDA out of memory
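A rough scaling argument (the 1280x768 baseline is an assumption for illustration, not a DD default) shows why this resolution is so demanding: the UNet's activation memory grows roughly linearly with pixel count, so 3840x2176 costs several times a typical run before any other overhead:

```python
# Activation memory scales roughly with pixel count, so compare the
# target resolution against an assumed typical baseline.
base_px = 1280 * 768        # assumed baseline resolution
target_px = 3840 * 2176     # the ~4K target from this issue
scale = target_px / base_px
print(f"~{scale:.1f}x the activation memory of a 1280x768 run")  # ~8.5x
```

Note that the illegal-memory-access variant at extreme sizes can also come from index overflow inside individual kernels rather than simple exhaustion, in which case more VRAM alone will not help.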

s3 bucket v-diffusion doesn't exist

The prep Dockerfile makes two calls to the s3 bucket v-diffusion, but that bucket doesn't exist.


The calls are:

RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://v-diffusion.s3.us-west-2.amazonaws.com/512x512_diffusion_uncond_finetune_008100.pt

and

RUN wget --no-directories --progress=bar:force:noscroll -P /scratch/models https://v-diffusion.s3.us-west-2.amazonaws.com/secondary_model_imagenet_2.pth

on lines 12 and 14 respectively.

To confirm the bucket doesn't exist, I went to my AWS account and created a bucket with that name (which I then deleted); since S3 bucket names are globally unique, the creation succeeding means the original bucket is gone.

I think the solution would be to update the Dockerfile with the correct bucket name.

ddim_sample_loop_progressive() got an unexpected keyword argument 'transformation_fn'

Running "4. Diffuse":

Requirement already satisfied: frozenlist>=1.1.1 in c:\programdata\anaconda3\envs\py37\lib\site-packages (from aiohttp->fsspec[http]!=2021.06.0,>=2021.05.0->pytorch-lightning) (1.3.0)

disco_xform_utils.py failed to import InferenceHelper. Please ensure that AdaBins directory is in the path (i.e. via sys.path.append('./AdaBins') or other means).
Using device: cuda:0
512 Model already downloaded, check check_model_SHA if the file is corrupt
Secondary Model already downloaded, check check_model_SHA if the file is corrupt
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: C:\ProgramData\Anaconda3\envs\py37\lib\site-packages\lpips\weights\v0.1\vgg.pth
Starting Run: TimeToDisco(0) at frame 0
Prepping model...
range(0, 1)
None
Frame 0 Prompt: ['A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.', 'yellow color scheme']
Batches: 0%| | 0/50 [00:00<?, ?it/s]

Output()
Seed used: 752449851
Traceback (most recent call last):
File "D:\Program Files\JetBrains\PyCharm Community Edition 2021.2.3\plugins\python-ce\helpers\pycharm\docrunner.py", line 305, in
modules = [loadSource(a[0])]
File "D:\Program Files\JetBrains\PyCharm Community Edition 2021.2.3\plugins\python-ce\helpers\pycharm\docrunner.py", line 237, in loadSource
module = _load_file(moduleName, fileName)
File "D:\Program Files\JetBrains\PyCharm Community Edition 2021.2.3\plugins\python-ce\helpers\pycharm\docrunner.py", line 209, in _load_file
return machinery.SourceFileLoader(moduleName, fileName).load_module()
File "", line 407, in _check_name_wrapper
File "", line 907, in load_module
File "", line 732, in load_module
File "", line 265, in _load_module_shim
File "", line 696, in _load
File "", line 677, in _load_unlocked
File "", line 728, in exec_module
File "", line 219, in _call_with_frames_removed
File "D:/WorkStation/[AI_Art]/disco-diffusion/disco.py", line 2607, in
do_run()
File "D:/WorkStation/[AI_Art]/disco-diffusion/disco.py", line 1350, in do_run
transformation_percent=args.transformation_percent
TypeError: ddim_sample_loop_progressive() got an unexpected keyword argument 'transformation_fn'

Process finished with exit code 1

Resume from latest if there is no latest

I have enabled resume_run and set resume_from_frame to latest. I would expect that if there is no "latest" image in the directory yet, it would start normally from scratch. But it will start from -1.

Then in the output directory there are MyBatchName(-1)_0000.png files instead of the correct MyBatchName(0)_0000.png.

PS: I can fix it and send a pull request, just let me know if it makes sense.
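Something like the following fallback (a sketch; the function name and the frame-file pattern are assumptions based on the filenames quoted above) would make "latest" degrade gracefully to a fresh start:

```python
import glob
import os

def resolve_resume_frame(batch_folder, batch_name, resume_from_frame="latest"):
    """Return the frame index to resume from; 0 when nothing exists yet."""
    if resume_from_frame != "latest":
        return int(resume_from_frame)
    # Pattern assumed from the MyBatchName(0)_0000.png naming in this issue.
    frames = glob.glob(os.path.join(batch_folder, f"{batch_name}(*)_*.png"))
    if not frames:
        return 0          # no "latest" yet: start from scratch, not -1
    return len(frames) - 1
```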

New OpenCLIP settings work ONLY during the same run in which the open_clip package is first installed.

I just started using DD 5.6 (after using 5.4 for a while) running locally on Ubuntu 18.04, and everything usually works fine, but the first time I tried using the new OpenCLIP settings, it crashed with the error:

AttributeError: module 'open_clip' has no attribute 'create_model'

However, looking through the source I can clearly see that the open_clip module does indeed have a create_model attribute, imported in the __init__ from factory. I heard someone on the Discord mention that they had the same error on Colab, and they found that restarting the runtime, re-installing, or attaching wiped storage fixed it.

Through experimentation I discovered that I can make this error go away by deleting the open_clip directory and forcing DD to re-install it on the next run; for that run only, the OpenCLIP settings work. On the next run it breaks again with the same error. This happens every time: I have to delete the open_clip directory to get those settings to work. Thankfully DD does seem to be able to batch multiple images without crashing on the second or subsequent ones, so I am hoping it will also do an animation without breaking on subsequent images during the same run. I will test that soon and update this issue with the results.

UPDATE 7/15/22: Animations do work.

So, for now, I have to delete the open_clip folder and let DD install it again on each run in order to use these new toys. The problem also goes away if I set all of the OpenCLIP settings to False, but that defeats the purpose; I want to be able to use them. OpenCLIP re-installs quickly and doesn't have to re-download any of the large files it fetched the first time I set each setting to True, so I don't mind that much, but I thought it was important to report.

As a temporary solution, I have just changed my command line to run DD to be:

rm -Rf open_clip/; python disco.py

That works, but there has to be a better way.
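One way to see what is going on (a diagnostic sketch, not a fix): check which copy of open_clip Python would actually import. A stale clone that sits earlier on sys.path than the fresh install shadows it and lacks create_model:

```python
import importlib.util

# Locate the open_clip that `import open_clip` would pick up; if this
# points at a stale local clone rather than the freshly installed
# package, that copy is the one missing `create_model`.
spec = importlib.util.find_spec("open_clip")
print(spec.origin if spec else "open_clip not importable")
```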

PS - Thank you so much for releasing Disco Diffusion!

Zooming to specific pixel

I'm trying to write a little code that steers the zoom into a specific pixel, but I can't quite figure out how the x/y translate values behave. For example, if I am using a [768, 512] image and I want to zoom into (100, 200) over 30 frames, how do I calculate the translate and zoom numbers needed to do this? Sorry if this is the wrong place to ask; I've been stuck for a while and am not getting much help on Discord/Reddit. Any guidance would be incredible, thanks so much!
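In case it helps, here is the fixed-point geometry (a sketch only: it assumes each frame scales about the image centre by the zoom factor and then shifts by the translate values; Disco's actual sign and ordering conventions may differ, so you may need to flip signs). For a point to stay put under p -> c + z*(p - c) + t, the per-frame translation must be t = (1 - z)*(p - c):

```python
def zoom_translate_for_pixel(width, height, tx, ty, zoom):
    """Per-frame (dx, dy) that keeps pixel (tx, ty) fixed while zooming
    by `zoom` about the image centre each frame (convention assumed)."""
    cx, cy = width / 2, height / 2
    dx = (1 - zoom) * (tx - cx)
    dy = (1 - zoom) * (ty - cy)
    return dx, dy

# [768, 512] image, zooming toward pixel (100, 200) at 5% per frame:
dx, dy = zoom_translate_for_pixel(768, 512, 100, 200, 1.05)
print(dx, dy)   # ≈ 14.2, 2.8 per frame, repeated for all 30 frames
```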

VR Mode skipping frames

Tried the new VR mode in 5.2. I ran a large batch and noticed many randomly missing frame files, which makes stitching impossible. It started after frame 10, so I wondered if it was turbo-related. [Update 4/11/22 5:47pm: tried turbo off; no frame file names are skipped.] Thinking this must be some kind of bug...


{
    "text_prompts": {
        "0": [
            "Cybernetic Xenomorph by H.R. Giger, highly detailed rendering in the style of colorful visionary art"
        ]
    },
    "image_prompts": {},
    "clip_guidance_scale": 51000,
    "tv_scale": 75,
    "range_scale": 1666,
    "sat_scale": 90000,
    "cutn_batches": 1,
    "max_frames": 10000,
    "interp_spline": "Linear",
    "init_image": null,
    "init_scale": 1000,
    "skip_steps": 10,
    "frames_scale": 1500,
    "frames_skip_steps": "60%",
    "perlin_init": false,
    "perlin_mode": "mixed",
    "skip_augs": false,
    "randomize_class": true,
    "clip_denoised": false,
    "clamp_grad": true,
    "clamp_max": 0.05,
    "seed": 1436827437,
    "fuzzy_prompt": false,
    "rand_mag": 0.05,
    "eta": 0.8,
    "width": 832,
    "height": 448,
    "diffusion_model": "512x512_diffusion_uncond_finetune_008100",
    "use_secondary_model": true,
    "steps": 100,
    "diffusion_steps": 1000,
    "diffusion_sampling_mode": "ddim",
    "ViTB32": true,
    "ViTB16": true,
    "ViTL14": false,
    "RN101": false,
    "RN50": true,
    "RN50x4": false,
    "RN50x16": false,
    "RN50x64": false,
    "cut_overview": "[12]*400+[4]*600",
    "cut_innercut": "[4]*400+[12]*600",
    "cut_ic_pow": 1,
    "cut_icgray_p": "[0.2]*400+[0]*600",
    "key_frames": true,
    "angle": "0:(0)",
    "zoom": "0: (1)",
    "translation_x": "0: (0)",
    "translation_y": "0: (0)",
    "translation_z": "0:(15)",
    "rotation_3d_x": "0: (0)",
    "rotation_3d_y": "0: (0)",
    "rotation_3d_z": "0:(-0.002)",
    "midas_depth_model": "dpt_large",
    "midas_weight": 0.3,
    "near_plane": 200,
    "far_plane": 10000,
    "fov": 120,
    "padding_mode": "border",
    "sampling_mode": "bicubic",
    "video_init_path": "/content/training.mp4",
    "extract_nth_frame": 2,
    "video_init_seed_continuity": true,
    "turbo_mode": true,
    "turbo_steps": "2",
    "turbo_preroll": 10
}

midas_utils module not found

When I tried to run disco.py, I ran into the following issue: midas_utils module not found.

As a workaround, I cloned MiDaS and put it inside disco-diffusion. In the disco_xform_utils.py script, I changed import midas_utils to import MiDaS.utils as midas_utils.

Is this a good solution to the issue, or should I pip install a particular package?
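An alternative that avoids editing disco_xform_utils.py (a sketch; it assumes the MiDaS repo is cloned next to disco.py and still exposes a top-level utils.py) is to register an alias so the original import midas_utils line resolves as-is:

```python
import importlib
import sys

# Put the MiDaS clone first on sys.path, then alias its top-level
# utils module under the name disco_xform_utils.py expects.
sys.path.insert(0, "MiDaS")
try:
    sys.modules["midas_utils"] = importlib.import_module("utils")
except ModuleNotFoundError:
    print("MiDaS clone not found next to this script")
```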

Unable to resume runs with no error

Attempting to resume a run after the Colab instance restarts or gets terminated and recreated results in nothing happening.


I don't know what's causing this, and there's no error visible. Just let me know if you need any information or logs.

Is it possible to generate using the CPU?

Hello.

I had some trouble with my graphics card, so I switched the device to CPU and got the following:

HBox(children=(FloatProgress(value=0.0, description='Batches', max=50.0, style=ProgressStyle(description_width='initial')), HTML(value='')))

Output()
0%| | 0/240 [00:00<?, ?it/s]
Seed used: 896041566
Traceback (most recent call last):
File "disco.py", line 2533, in
do_run()
File "disco.py", line 1338, in do_run
for j, sample in enumerate(samples):
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 900, in ddim_sample_loop_progressive
eta=eta,
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 671, in ddim_sample
model_kwargs=model_kwargs,
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\respace.py", line 91, in p_mean_variance
return super().p_mean_variance(self._wrap_model(model), *args, **kwargs)
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\gaussian_diffusion.py", line 260, in p_mean_variance
model_output = model(x, self._scale_timesteps(t), **model_kwargs)
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\respace.py", line 128, in call
return self.model(x, new_ts, **kwargs)
File "D:\ProgramData\Anaconda3\envs\tf\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\unet.py", line 656, in forward
h = module(h, emb)
File "D:\ProgramData\Anaconda3\envs\tf\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "D:\disco-diffusion/guided-diffusion\guided_diffusion\unet.py", line 77, in forward
x = layer(x)
File "D:\ProgramData\Anaconda3\envs\tf\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "D:\ProgramData\Anaconda3\envs\tf\lib\site-packages\torch\nn\modules\conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\ProgramData\Anaconda3\envs\tf\lib\site-packages\torch\nn\modules\conv.py", line 443, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: "unfolded2d_copy" not implemented for 'Half'

What should I do?
Thanks
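"unfolded2d_copy" not implemented for 'Half' means a float16 convolution ran on the CPU, where PyTorch has no Half kernels for that op. The usual fix is to keep everything in float32 when the device is not CUDA; in Disco Diffusion that corresponds to the use_fp16 entry of model_config, shown here as an isolated sketch:

```python
# Keep fp16 only on CUDA; CPU conv kernels have no Half implementation,
# which is exactly what the "unfolded2d_copy" error is reporting.
device_type = "cpu"   # substitute the result of your actual device check
model_config = {"use_fp16": device_type == "cuda"}  # False on CPU
```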

What are the better settings?

Thanks for this great project; installing it locally is very easy and fast.

But the images generated with the v5.2 default settings (I changed n_batches to 2 and steps to 50) are very bad, far worse than the results in https://medium.com/@nin_artificial/dall-e-2-vs-disco-diffusion-c6de6bfbacf9, and generation is slow. With the same size and steps as https://huggingface.co/spaces/multimodalart/latentdiffusion, the speed is slower than latentdiffusion and the quality is worse.

Do I have to use n_batches=50 and steps=250? That would be too slow on a 1080 Ti; even n_batches=2, steps=50 is slower than latentdiffusion. So what are the better settings?

Update:
I have tried n_batches=2, steps=250, but the result is only slightly improved, and I don't know why. Maybe my prompts are the problem, but with latentdiffusion most prompts generate a good image with just 50 steps.

Lost the ability to resume an animation in 5.2 -- works in 5.1

I'm not sure which update broke this functionality, but there has been a regression from Disco Diffusion 5.1: I seem to have lost the ability to resume a batch from a fresh instance, either via Colab or running locally. If I try to pick up a batch from a previous frame, I get the error "file not found: prevFrame.png". That doesn't really make sense, because the previous frame in this case should be the init frame I selected, and since the instance has never run before there are no artifacts and therefore no previous frame. This used to work fine for me, but now animation resuming doesn't work at all unless I'm in the same session, which is a bit of a showstopper considering timeouts. I wonder if this is related to @aletts's recent updates.

How large is the project?

Last night I installed Docker for Windows, changed directory to docker/prep, and executed docker build -t disco-diffusion-prep:5.1 . At first it was running successfully, and I was glad. An hour later, I saw the progress bar stuck at 3/19, with only the elapsed time growing and the transferred data standing still. Two hours later it was the same. I also noticed that the failed build took nearly 15 GB on my C: drive.

An easy fix... init_scale vs args.init_scale

if init is not None and args.init_scale:
    init_losses = lpips_model(x_in, init)
    loss = loss + init_losses.sum() * args.init_scale

instead should be

if init is not None and init_scale:
    init_losses = lpips_model(x_in, init)
    loss = loss + init_losses.sum() * init_scale

because earlier we might have set init_scale equal to frames_scale (if we are doing an animation);

otherwise, init_scale is just a dead-end variable.

Re-Enable ECC on NVIDIA consumer GPU

Personally, I don't know enough about code to make this thing work. I know the heavy lifting is all done as is, but there are still too many steps popping out too many errors for me, with no clear-cut solution to the most recent issue I had. Anyway, now I'm left with ECC disabled, and NVIDIA's website only gives instructions on how to re-enable it if you're using a workstation-grade system. I'd rather not leave this open.
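For reference, ECC state is controlled through NVIDIA's standard nvidia-smi tool rather than anything DD-specific; on GPUs that expose ECC at all, the following (run with administrator rights, followed by a reboot) should restore it. This is a sketch of the documented CLI, so check nvidia-smi --help on your system first:

```shell
# Re-enable ECC on GPU 0 (requires root/admin; takes effect after reboot)
nvidia-smi -i 0 -e 1

# After rebooting, confirm the ECC mode
nvidia-smi -q -d ECC
```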

First Run Error: No module named 'in_path'

All the dependencies installed successfully. I can run each cell up to diffuse, which fails with

ModuleNotFoundError                       Traceback (most recent call last)
[/tmp/ipykernel_8090/3795853057.py](https://localhost:8080/#) in <module>
      1 import PIL
      2 import glob
----> 3 import in_path
      4 
      5 

ModuleNotFoundError: No module named 'in_path'

I also had to add import glob myself, only to then fail on in_path. I did run the animation settings cell prior to this.

Is this common? I cloned from main yesterday.

Running WSL Ubuntu; I followed the Microsoft docs for setting up a local Jupyter for use with Disco.

Any help is MUCH APPRECIATED!

I'm getting an all-black image when connecting to the local runtime

{
    "text_prompts": {
        "0": [
            "A beautiful painting of a singular lighthouse, shining its light across a tumultuous sea of blood by greg rutkowski and thomas kinkade, Trending on artstation.",
            "yellow color scheme"
        ],
        "100": [
            "This set of prompts start at frame 100",
            "This prompt has weight five:5"
        ]
    },
    "image_prompts": {},
    "clip_guidance_scale": 5000,
    "tv_scale": 0,
    "range_scale": 150,
    "sat_scale": 0,
    "cutn_batches": 4,
    "max_frames": 10000,
    "interp_spline": "Linear",
    "init_image": null,
    "init_scale": 1000,
    "skip_steps": 0,
    "frames_scale": 1500,
    "frames_skip_steps": "60%",
    "perlin_init": false,
    "perlin_mode": "mixed",
    "skip_augs": false,
    "randomize_class": true,
    "clip_denoised": false,
    "clamp_grad": true,
    "clamp_max": 0.05,
    "seed": 1748492762,
    "fuzzy_prompt": false,
    "rand_mag": 0.05,
    "eta": 0.8,
    "width": 256,
    "height": 256,
    "diffusion_model": "512x512_diffusion_uncond_finetune_008100",
    "use_secondary_model": true,
    "steps": 250,
    "diffusion_steps": 1000,
    "diffusion_sampling_mode": "ddim",
    "ViTB32": true,
    "ViTB16": true,
    "ViTL14": false,
    "RN101": false,
    "RN50": true,
    "RN50x4": false,
    "RN50x16": false,
    "RN50x64": false,
    "cut_overview": "[12]*400+[4]*600",
    "cut_innercut": "[4]*400+[12]*600",
    "cut_ic_pow": 1,
    "cut_icgray_p": "[0.2]*400+[0]*600",
    "key_frames": true,
    "angle": "0:(0)",
    "zoom": "0: (1), 10: (1.05)",
    "translation_x": "0: (0)",
    "translation_y": "0: (0)",
    "translation_z": "0: (10.0)",
    "rotation_3d_x": "0: (0)",
    "rotation_3d_y": "0: (0)",
    "rotation_3d_z": "0: (0)",
    "midas_depth_model": "dpt_large",
    "midas_weight": 0.3,
    "near_plane": 200,
    "far_plane": 10000,
    "fov": 40,
    "padding_mode": "border",
    "sampling_mode": "bicubic",
    "video_init_path": "training.mp4",
    "extract_nth_frame": 2,
    "video_init_seed_continuity": true,
    "turbo_mode": false,
    "turbo_steps": "3",
    "turbo_preroll": 10
}

Here is my message

ImportError: cannot import name 'InferenceHelper' from 'infer' (/home/ubuntu/.local/lib/python3.8/site-packages/infer/__init__.py)

Hello, I attempted to install it (Disco_Diffusion_v5_4_[Now_with_Warp].ipynb) on Ubuntu 20.04 LTS with JupyterLab, but ran into the following issue. Is there a solution? Thanks.

disco_xform_utils.py failed to import InferenceHelper. Please ensure that AdaBins directory is in the path (i.e. via sys.path.append('./AdaBins') or other means).


ImportError                               Traceback (most recent call last)
Input In [56], in <cell line: 134>()
    142         wget("https://cloudflare-ipfs.com/ipfs/Qmd2mMnDLWePKmgfS8m6ntAg4nhV5VkUyAydYBp8cWWeB7/AdaBins_nyu.pt", f'{PROJECT_DIR}/pretrained')
    143     sys.path.append(f'{PROJECT_DIR}/AdaBins')
--> 144 from infer import InferenceHelper
    145 print(sys.path)
    146 MAX_ADABINS_AREA = 500000

ImportError: cannot import name 'InferenceHelper' from 'infer' (/home/ubuntu/.local/lib/python3.8/site-packages/infer/__init__.py)
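The traceback shows from infer import InferenceHelper resolving to a pip-installed infer package in site-packages rather than AdaBins' local infer.py; because the notebook appends the AdaBins directory to the end of sys.path, the site-packages copy wins. A sketch of two workarounds (the AdaBins path is the one the notebook itself uses):

```python
import sys

# Put the AdaBins clone at the FRONT of sys.path so its infer.py beats
# the pip-installed `infer` package sitting in site-packages.
sys.path.insert(0, "./AdaBins")

# Alternatively, remove the shadowing package entirely:
#   pip uninstall infer
```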

Cannot Use New OpenCLIP LAION Models

So far unable to use the new LAION CLIP models.

Disco crashes on step two with this error:

AttributeError                            Traceback (most recent call last)
Input In [19], in <cell line: 177>()
    175 if RN50x64: clip_models.append(clip.load('RN50x64', jit=False)[0].eval().requires_grad_(False).to(device))
    176 if RN101: clip_models.append(clip.load('RN101', jit=False)[0].eval().requires_grad_(False).to(device))
--> 177 if ViTB32_laion2b_e16: clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion2b_e16').eval().requires_grad_(False).to(device))
    178 if ViTB32_laion400m_e31: clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e31').eval().requires_grad_(False).to(device))
    179 if ViTB32_laion400m_32: clip_models.append(open_clip.create_model('ViT-B-32', pretrained='laion400m_e32').eval().requires_grad_(False).to(device))

**AttributeError: module 'open_clip' has no attribute 'create_model'**

Running Ubuntu 20.04; I rebuilt my conda environment and re-cloned everything.

Thanks for the fascinating software!

Streamline additional batch runs with settings changes

Since creating art with DD is an interactive, iterative, and time-intensive process, it would be great to be able to automatically re-run the notebook with new changes after the current image is complete. Currently, the interactive process for batch runs is: wait until the image is complete and a new image in the batch starts, go to Runtime > Interrupt execution, make setup changes if you haven't already, scroll up above those changes and select a cell, and go to Runtime > Run after.

This could (possibly) be streamlined for multi-batch use with a checkbox in the "Do the Run!" section to restart_batch_with_updated_settings_when_complete. The workflow, while the batch is running, could be to make settings changes above and then check the checkbox. When the current batch image is complete, it could automatically restart the run at the "3. Settings" section to cover all changes before "4. Diffuse". Ideally the new checkbox would be programmatically unchecked so that the full n_batches could complete if the user is satisfied.

This way you would not need to closely monitor its progress—you could check on it, make small changes, and leave it.

Re-draw the run's progress bar more often

Sometimes the progress bar is not drawn when a session is disconnected for an unknown reason. I use Colab Pro+, so it is computing in the background and eventually reconnects, but the progress bar is removed. I have checked that it is not hidden under a scrollbar in those cases.

Could the progress bar elements be (re)drawn/replaced every time the display_rate updates the image? That may make the disappearing progress bar less of an issue.


File not found error

I have configured everything as in the previous version 5.4 but it does not work for me. Can anyone help me here? Thanks.


init_image not working

After setting an init_image I'm getting the following error. The image does exist inside the init_images folder in google drive.

FileNotFoundError: [Errno 2] No such file or directory: 'darth.png'

Howdy, just noticed a typo in the main description

"v3 Update: Dec 24th 2021 - Somnai"
...
"v4 Update: Jan 2021 - Somnai" [<-- I'm wagering this should be 2022]
...
"v4.1 Update: Jan 14th 2021 - Somnai" [<-- Same for this line]
...
"v5 Update: Feb 20th 2022 - gandamu / Adam Letts"
