
amd_webui's Issues

Model conversion error

I get an error when converting any model. Can anyone help?

This is what it looks like:

OSError: [Errno 22] Invalid argument: 'C:\Users\USER/.cache\huggingface\diffusers\models--stabilityai--stable-diffusion-2\blobs\W/"76e821f1b6f0a9709293c3b6b51ed90980b3166b.lock'

[screenshot: imagem_2023-06-21_185018798]
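The etag in that path (the W/" prefix) is being used verbatim as a lock-file name, which Windows rejects as an invalid filename. Upgrading huggingface_hub may resolve it, since later releases changed how etags are handled on Windows. As a stopgap, a minimal sketch (assuming the default cache location shown in the error message) that wipes the partial download so it can be re-fetched:

import shutil
from pathlib import Path

# Default Hugging Face diffusers cache, per the path in the error above.
model_dir = Path.home() / ".cache" / "huggingface" / "diffusers" / "models--stabilityai--stable-diffusion-2"
shutil.rmtree(model_dir, ignore_errors=True)  # delete the partial download; it is re-fetched on the next attempt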

Could not install some models

Hello. I had successfully installed this environment (other AMD UIs failed to install; this was the only one that worked), but the models I have used on another PC cannot be installed. Here are the full error messages. This is the model URL I tried to install:
https://huggingface.co/WarriorMama777/OrangeMixs

Installing onnx...
Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 582/582 [00:00<?, ?B/s]
C:\Git\amd_webui\venv\lib\site-packages\huggingface_hub\file_download.py:127: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\magiwogg\.cache\huggingface\diffusers. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
warnings.warn(message)
Fetching 1 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<?, ?it/s]
Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate
Traceback (most recent call last):
  File "C:\Git\amd_webui\venv\lib\site-packages\gradio\routes.py", line 292, in run_predict
    output = await app.blocks.process_api(
  File "C:\Git\amd_webui\venv\lib\site-packages\gradio\blocks.py", line 1007, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "C:\Git\amd_webui\venv\lib\site-packages\gradio\blocks.py", line 848, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Git\amd_webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Git\amd_webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Git\amd_webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Git\amd_webui\amd_webui.py", line 138, in download_sd_model
    convert_stable_diffusion_checkpoint_to_onnx.convert_models(model_path, str(onnx_model_dir), onnx_opset, onnx_fp16)
  File "C:\Git\amd_webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Git\amd_webui\venv\src\diffusers\scripts\convert_stable_diffusion_checkpoint_to_onnx.py", line 80, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
  File "C:\Git\amd_webui\venv\src\diffusers\src\diffusers\pipeline_utils.py", line 705, in from_pretrained
    loaded_sub_model = load_method(cached_folder, **loading_kwargs)
  File "C:\Git\amd_webui\venv\lib\site-packages\transformers\modeling_utils.py", line 2012, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  File "C:\Git\amd_webui\venv\lib\site-packages\transformers\models\clip\configuration_clip.py", line 135, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Git\amd_webui\venv\lib\site-packages\transformers\configuration_utils.py", line 559, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Git\amd_webui\venv\lib\site-packages\transformers\configuration_utils.py", line 614, in _get_config_dict
    resolved_config_file = cached_file(
  File "C:\Git\amd_webui\venv\lib\site-packages\transformers\utils\hub.py", line 380, in cached_file
    raise EnvironmentError(
OSError: C:\Users\magiwogg/.cache\huggingface\diffusers\models--WarriorMama777--OrangeMixs\snapshots\ec9df50045e9687fd7ea8116db84c4ad5c4a4358 does not appear to have a file named config.json. Checkout 'https://huggingface.co/C:\Users\magiwogg/.cache\huggingface\diffusers\models--WarriorMama777--OrangeMixs\snapshots\ec9df50045e9687fd7ea8116db84c4ad5c4a4358/None' for available files.
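The failure here is that WarriorMama777/OrangeMixs hosts raw .ckpt/.safetensors checkpoint files rather than a diffusers-format repo, so from_pretrained() finds no config.json to load. A sketch of one possible workaround, assuming the convert_from_ckpt helper shipped in diffusers of this era (the checkpoint filename is illustrative; pick a real one from the repo's file list):

from huggingface_hub import hf_hub_download
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

# Fetch one specific checkpoint file from the repo (filename is an example).
ckpt_path = hf_hub_download("WarriorMama777/OrangeMixs", "Models/AbyssOrangeMix3/AOM3.safetensors")

# Convert it into a diffusers-format folder that tools expecting config.json can load.
pipe = download_from_original_stable_diffusion_ckpt(ckpt_path, from_safetensors=True)
pipe.save_pretrained("OrangeMixs_diffusers")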

DML Execution Provider

"Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider."

This is on an RX 6700 XT / R7 3700X build. What can I do?
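That quoted line is a warning, not an error: onnxruntime simply disables the memory-pattern optimization because the DirectML provider does not support it, and generation should proceed regardless. If the goal is just to silence it, a minimal sketch (assuming onnxruntime-directml; the model path is illustrative) that disables the option up front:

import onnxruntime as ort

opts = ort.SessionOptions()
opts.enable_mem_pattern = False  # unsupported by the DML provider, so turn it off explicitly
session = ort.InferenceSession("model.onnx", opts, providers=["DmlExecutionProvider"])

If images actually fail to generate, the warning is unlikely to be the cause; the device-removed errors reported elsewhere on this page are the more probable culprit.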

No module named 'gradio'

First off, I want to say you're a genius for creating a script that automates the installation process, including the virtual environment; I don't know why people don't do that more often.

However, I get this error when running the script:

venv "C:\stable diffusion\amd_webui-main\venv\Scripts\Python.exe"
You are using python version - 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)]
installing requirements
Processing c:\stable diffusion\amd_webui-main\repositories\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64.whl
Requirement already satisfied: numpy>=1.21.6 in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (1.24.1)
Requirement already satisfied: packaging in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (23.0)
Requirement already satisfied: sympy in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (1.11.1)
Requirement already satisfied: coloredlogs in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (15.0.1)
Requirement already satisfied: protobuf in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (4.21.12)
Requirement already satisfied: flatbuffers in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from ort-nightly-directml==1.13.0.dev20220908001) (23.1.4)
Requirement already satisfied: humanfriendly>=9.1 in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from coloredlogs->ort-nightly-directml==1.13.0.dev20220908001) (10.0)
Requirement already satisfied: mpmath>=0.19 in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from sympy->ort-nightly-directml==1.13.0.dev20220908001) (1.2.1)
Requirement already satisfied: pyreadline3 in c:\stable diffusion\amd_webui-main\venv\lib\site-packages (from humanfriendly>=9.1->coloredlogs->ort-nightly-directml==1.13.0.dev20220908001) (3.4.1)
ort-nightly-directml is already installed with the same version as the provided wheel. Use --force-reinstall to force an installation of the wheel.
WARNING: You are using pip version 22.0.4; however, version 22.3.1 is available.
You should consider upgrading via the 'D:\stable diffusion\amd_webui-main\venv\Scripts\python.exe -m pip install --upgrade pip' command.
Done installing
Traceback (most recent call last):
  File "C:\stable diffusion\amd_webui-main\start_app.py", line 69, in <module>
    import amd_webui
  File "C:\stable diffusion\amd_webui-main\amd_webui.py", line 1, in <module>
    import gradio as gr
ModuleNotFoundError: No module named 'gradio'
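A hedged guess at the gap: the installer log only shows the onnxruntime wheel being processed, so gradio may simply never have been installed into the venv. Running pip with the venv's own interpreter is the safest way to land it in the right place; a minimal sketch (run with venv\Scripts\python.exe):

import subprocess
import sys

# Install gradio into whichever Python is executing this script (the venv's).
subprocess.check_call([sys.executable, "-m", "pip", "install", "gradio"])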

ImportError after model download

I feel like I'm tripping over the finish line here. I downloaded the model successfully and closed the terminal as instructed. When I re-run the batch file to relaunch, I'm given this error:

ImportError: DLL load failed while importing _ufuncs: %1 is not a valid Win32 application.

This is all new to me, so apologies if this is basic. I can no longer launch the web UI at all. What can I do to fix this? Thanks!
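"%1 is not a valid Win32 application" usually indicates a 32-bit/64-bit mismatch between the Python interpreter and a compiled extension (here scipy's _ufuncs), or a corrupted wheel. A quick diagnostic sketch to confirm the venv's interpreter is 64-bit:

import platform
import struct

# Prints e.g. "3.10.4 64-bit"; 32-bit points to the mismatch described above.
print(platform.python_version(), f"{struct.calcsize('P') * 8}-bit")

If it reports 64-bit, force-reinstalling numpy and scipy inside the venv is a reasonable next step; if 32-bit, installing 64-bit Python and recreating the venv should line things up.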

When I try to generate an image I get an error.

The error is:

onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_56' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1959)\onnxruntime_pybind11_state.pyd!00007FFBDC68C0FB: (caller: 00007FFBDCD9DDAF) Exception(3) tid(5200) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.

I own a TUF GAMING Radeonβ„’ RX 6800 XT OC video card. Do you know how to fix this?
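Error 887A0006 is a generic "device removed" failure from the GPU driver; on DirectML it often correlates with VRAM exhaustion or a driver reset rather than a bug in the model itself. A mitigation sketch, not a guaranteed fix, assuming pipe is the ONNX Stable Diffusion pipeline the webui constructs: try a smaller image and fewer steps to see whether memory pressure is the trigger.

# Smaller resolution and fewer steps reduce peak VRAM use.
image = pipe("a test prompt", height=512, width=512, num_inference_steps=20).images[0]
image.save("test.png")

Updating the AMD driver is also worth trying, since device-removed errors frequently track driver versions.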

Problems when importing different models.

Importing some models from Hugging Face, such as Anything V4.0, produces buggy results. On Anything V4.0's Hugging Face page there are several .ckpt files tuned for different uses, yet we have no way of choosing which one to use. It would be good to have a way to import specific .ckpt files and load them individually; currently, when importing a model, I'm not sure which .ckpt file is used, and the result is buggy images full of artifacts.

Perhaps I'm doing something wrong. Any help would be appreciated.
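For reference, a sketch of how one specific checkpoint could be pulled from a multi-file repo using the huggingface_hub API (the repo id and filename are illustrative, not what amd_webui does internally):

from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="andite/anything-v4.0",        # illustrative repo id
    filename="anything-v4.5-pruned.ckpt",  # the exact .ckpt variant you want
)
print(local_path)  # cached local path to the chosen checkpoint

A picker like this in the import flow would make the chosen .ckpt explicit instead of leaving it to the downloader's defaults.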

Path is not found even after checking the PATH box during Python installation

I downloaded Python 3.10.9 (I also tried 3.10.10), checked the "Add python.exe to PATH" box, clicked "Install Now", ran the start.bat file, and got the following error:

The system cannot find the path specified.
Couldn't start_app python
venv "C:\Users\User\AppData\Local\Temp\Temp2_amd_webui-main.zip\amd_webui-main\venv\Scripts\Python.exe"
The system cannot find the path specified.
Press any key to continue . . .
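One detail worth noting: the venv path in the log sits under Temp\Temp2_amd_webui-main.zip\, which suggests the app is being run from inside Windows' zip preview rather than an extracted folder, so its relative paths cannot resolve. Extracting the archive to a normal directory first is the likely fix; afterwards, a one-line diagnostic sketch to confirm which Python the shell resolves:

import shutil

# Should print a real installation path, not None and nothing under Temp.
print(shutil.which("python"))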

Cannot initialize model with low cpu memory usage

Cannot initialize model with low cpu memory usage because accelerate was not found in the environment. Defaulting to low_cpu_mem_usage=False. It is strongly recommended to install accelerate for faster and less memory-intense model loading. You can do so with:

pip install accelerate
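As the message says, this is a warning rather than a failure: loading still works, just more slowly and with more RAM. After installing accelerate into the venv, the faster path can be requested explicitly; a minimal sketch (the model id is illustrative):

from diffusers import StableDiffusionPipeline

# low_cpu_mem_usage=True requires accelerate; it streams weights into the
# model instead of materializing a full randomly-initialized copy first.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    low_cpu_mem_usage=True,
)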

Error when generating images

Whenever I try to generate an image I get this error. It says to use GetDeviceRemovedReason to figure out what to do, but I can't type in the CMD window. What should I do here?

2023-01-31 17:21:35.2587704 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:39.6777490 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:41.6077952 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
2023-01-31 17:21:42.3875262 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
0%| | 0/30 [00:00<?, ?it/s]
2023-01-31 17:22:39.6907470 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_465' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(3) tid(3334) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.

0%| | 0/30 [00:10<?, ?it/s]
Traceback (most recent call last):
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\routes.py", line 292, in run_predict
    output = await app.blocks.process_api(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 1007, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 848, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\amd_webui.py", line 47, in txt2img
    image = pipe(prompt,
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 274, in __call__
    noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=text_embeddings)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\onnx_utils.py", line 61, in __call__
    return self.model.run(None, inputs)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_465' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(3) tid(3334) 887A0006 The GPU will not respond to more commands, most likely because of an invalid command passed by the calling application.

2023-01-31 17:22:54.1506880 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MemcpyFromHost node. Name:'Memcpy_token_0' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(6) tid(3334) 887A0005 The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action.

Traceback (most recent call last):
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\routes.py", line 292, in run_predict
    output = await app.blocks.process_api(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 1007, in process_api
    result = await self.call_function(fn_index, inputs, iterator, request)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\gradio\blocks.py", line 848, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\amd_webui.py", line 47, in txt2img
    image = pipe(prompt,
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 235, in __call__
    text_embeddings = self._encode_prompt(
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 150, in _encode_prompt
    text_embeddings = self.text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\src\diffusers\src\diffusers\onnx_utils.py", line 61, in __call__
    return self.model.run(None, inputs)
  File "F:\AMD Stablediffusion WebUI\amd_webui-main\amd_webui-main\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running MemcpyFromHost node. Name:'Memcpy_token_0' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFCCBA2C16F: (caller: 00007FFCCC13DDAF) Exception(6) tid(3334) 887A0005 The GPU device instance has been suspended. Use GetDeviceRemovedReason to determine the appropriate action.
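Error 887A0005 means Windows' Timeout Detection and Recovery (TDR) reset the GPU because a workload stalled past the timeout; GetDeviceRemovedReason is an API the application calls, not something to type into CMD. Raising TdrDelay is a commonly cited workaround, though editing the registry is at your own risk. A read-only diagnostic sketch that checks whether a custom TdrDelay is currently set (absence means the default of roughly 2 seconds applies):

import winreg

key_path = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "TdrDelay")
        print(f"TdrDelay = {value} seconds")
    except FileNotFoundError:
        print("TdrDelay not set; Windows default (~2 seconds) applies")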

UI title update

The word in the title of the UI should be STABLE, not SATBLE.
