Comments (23)

GuiyeC commented on July 30, 2024

Hello everyone! I have created a small app, Guernika Model Converter, that converts existing local models and CKPT files into Core ML compatible models.

I have been able to convert DreamBooth models and HuggingFace models that were giving me problems when loading from the identifier.

This app is essentially a PyInstaller wrapper around modified scripts with a tkinter UI. You can also check the modified scripts in scripts.zip in the same repo.


GuiyeC commented on July 30, 2024

@radfaraf and I were able to find the problem: it looks like Xcode is mandatory in order to convert models. Once it was installed, he was able to successfully convert both .ckpt files. Thanks again for the help debugging this 🙏
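
A minimal sketch of a check for this, assuming macOS with xcode-select on the PATH (this check is an illustration, not something from the thread):

import subprocess

# Prints the active developer directory; it should point to a full Xcode
# install (e.g. /Applications/Xcode.app/Contents/Developer), not just the
# Command Line Tools package.
print(subprocess.run(["xcode-select", "-p"], capture_output=True, text=True).stdout.strip())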


godly-devotion commented on July 30, 2024

I've been working on converting ckpt files to Core ML as well. I found a process that seems to work but needs more testing.
https://github.com/godly-devotion/mochi-diffusion/wiki/How-to-convert-ckpt-files-to-Core-ML


JamieMartin commented on July 30, 2024

@alelordelo If you want to convert a model from a .ckpt file for use with Core ML, you can do so by following these steps:

Step One:
First, produce the whole diffusers model folder (not just the .ckpt). You can do this using a conversion script like the one in diffusers, convert_original_stable_diffusion_to_diffusers.py.

Example usage:
python3 -m convert_original_stable_diffusion_to_diffusers --checkpoint_path '/Users/username/folder/model.ckpt' --original_config_file '/Users/username/folder/v1-inference.yaml' --dump_path '/Users/username/output_path/models'

Ideally, you will want to use the same inference .yaml as the one used to train the model. If in doubt, you can find the inference .yaml files for v1 and v2 models in the original Stable Diffusion repositories:
v1-inference.yaml - for v1.x models
v2-inference.yaml - for v2.x models
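
For reference, recent versions of diffusers expose the same conversion through the Python API; a minimal sketch, assuming a diffusers version that provides from_single_file (paths are the placeholders from the example above):

from diffusers import StableDiffusionPipeline

# Load the raw checkpoint and write out the diffusers-format folder
# that Step Two expects.
pipe = StableDiffusionPipeline.from_single_file(
    "/Users/username/folder/model.ckpt",
    original_config_file="/Users/username/folder/v1-inference.yaml",
)
pipe.save_pretrained("/Users/username/output_path/models")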

Step Two
Next, point to the output of that command (e.g. /Users/username/output_path/models) and convert it to a Core ML model using torch2coreml.

python3 -m python_coreml_stable_diffusion.torch2coreml --model-version '/Users/username/output_path/models' --convert-unet --convert-text-encoder --convert-vae-decoder --chunk-unet --convert-safety-checker -o ./models

Inference
Now you can use this new custom Core ML model created from your .ckpt for inference. Here's an example:
python3 -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i './models' -o './outputs' --compute-unit ALL --seed 93 --model-version '/Users/username/output_path/models'
Note: you'll still need to reference the old unconverted model, because the pipeline script still uses it for metadata.


tomy128 commented on July 30, 2024

Hi @tomy128 I'm not familiar with the ml-stable-diffusion project but if you are wondering how the HF cache is working, here is a piece of documentation you can read: https://huggingface.co/docs/huggingface_hub/guides/manage-cache [...]

Thanks for your reply. I just read the documentation you provided, and it helped me understand the HF cache better. But it doesn't answer my questions; maybe I need to go deeper into the ml-stable-diffusion code. Thanks again! :-)


brandonkoch3 commented on July 30, 2024

I was able to use a local model that I cloned from HuggingFace by feeding the local path to the model folder as the --model-version argument. For example, I used this command:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o sd2CoremlChunked --model-version /Users/me/Desktop/stable-diffusion-2 --bundle-resources-for-swift-cli --chunk-unet --attention-implementation SPLIT_EINSUM

It seems like the converter is looking for a .bin file, which I found in the vae folder when cloning from HuggingFace. Not sure if this answers your question, but it helped me.
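To avoid a failed run, a small sanity check along these lines can confirm a local folder is in the diffusers format before converting (a sketch; the path is the example clone from above):

import json
import pathlib

folder = pathlib.Path("/Users/me/Desktop/stable-diffusion-2")  # local clone from above
index = folder / "model_index.json"
if not index.exists():
    raise SystemExit(f"{index} missing: not a diffusers-format model folder")
# model_index.json maps component names (unet, vae, text_encoder, ...) to classes
print(sorted(json.loads(index.read_text()).keys()))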


jeet-dhandha commented on July 30, 2024

@atiorh Please update this whenever possible. Thank you.


alelordelo commented on July 30, 2024

Has anyone managed to convert a model from a .ckpt file?

I tried this as suggested by @brandonkoch3

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --model-version /Users/Downloads/Illustrators2.ckpt -o /Applications/Diffusion\ Coreml

But got this error:
OSError: It looks like the config file at '/Users//Downloads/Illustrators2.ckpt' is not a valid JSON file.


pedx78 commented on July 30, 2024

@alelordelo, it should be a link to a folder, not a file:
/Users/Downloads/model_folder


radfaraf commented on July 30, 2024

Local doesn't seem to work for me -- wondering if maybe miniconda3 is not supported and I need the larger package? It also seems to expect to find a model_index.json file, when I just have models in .ckpt and .safetensors format in my Stable-diffusion/convert folder.

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --model-version /Volumes/2TB\ SSD/Stable-diffusion/convert -o /Users/robertw/ml-stable-diffusion

INFO:__main__:Initializing StableDiffusionPipeline with /Volumes/2TB SSD/Stable-diffusion/convert..
Traceback (most recent call last):
  File "/Users/robertw/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/robertw/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/robertw/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 926, in <module>
    main(args)
  File "/Users/robertw/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 795, in main
    pipe = StableDiffusionPipeline.from_pretrained(args.model_version,
  File "/Users/robertw/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/diffusers/pipeline_utils.py", line 515, in from_pretrained
    config_dict = cls.load_config(cached_folder)
  File "/Users/robertw/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 320, in load_config
    raise EnvironmentError(
OSError: Error no file named model_index.json found in directory /Volumes/2TB SSD/Stable-diffusion/convert.


brandonkoch3 commented on July 30, 2024

Local doesn't seem to work for me -- wondering if maybe miniconda3 is not supported and I need the larger package? It also seems to expect to find a model_index.json file, when I just have models in .ckpt and .safetensors format in my Stable-diffusion/convert folder.
[...]
OSError: Error no file named model_index.json found in directory /Volumes/2TB SSD/Stable-diffusion/convert.

Where did your model come from, @radfaraf? To your point, it definitely seems like the Core ML tools are looking for a folder structure largely reminiscent of what you'd see on the official repos for Stable Diffusion, like the Stable Diffusion 2.1 main repo. That includes a model_index.json (which, if you look at the one for Stable Diffusion 2.1, is really just a JSON file with some detail about the underlying structure of the model).

I'm still trying to piece this together myself, but I'm wondering whether your .ckpt is something you fine-tuned yourself, or where you got it from. Most likely, the models we have that are just .ckpt files are based directly on "official" models from StabilityAI, so I wonder if we can piece together the folder structure from the official models and get this running. (I tried this on a model I fine-tuned using Dreambooth and almost got it to work, but ran into a conversion error with the Core ML convert tools related to what I think are floating point value errors, so I'm re-training using FP32 vs. FP16 to see if that works.)

I'll keep you updated, but curious if you can share more on the model you're trying to convert.


radfaraf commented on July 30, 2024

Here are the ones I tried: https://civitai.com/models/1254/elldreths-dream-mix
and https://civitai.com/models/1259/elldreths-og-4060-mix


radfaraf commented on July 30, 2024

I found that someone had a model_index.json file for one of those here: https://huggingface.co/johnslegers/elldrethsdream/tree/main

I put it in the same folder as the model. Now it just wants another JSON file:
OSError: Error no file named scheduler_config.json found in directory /Volumes/2TB SSD/Stable-diffusion/convert.
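
For context, a diffusers-format folder carries a full set of per-component configs, so copying in model_index.json alone is not enough. A rough sketch of a check, assuming the usual v1/v2 layout:

import pathlib

folder = pathlib.Path("/Volumes/2TB SSD/Stable-diffusion/convert")  # folder from the error above
# Typical per-component files in a diffusers-format model folder.
expected = [
    "model_index.json",
    "scheduler/scheduler_config.json",
    "text_encoder/config.json",
    "tokenizer/tokenizer_config.json",
    "unet/config.json",
    "vae/config.json",
]
missing = [p for p in expected if not (folder / p).exists()]
print("missing:", missing)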


alelordelo commented on July 30, 2024

@alelordelo, it should be a link to a folder, not a file:
/Users/Downloads/model_folder

Thanks @pedx78, but what if I have a custom .ckpt that I trained with Dreambooth on Google Colab, for example?
Should I download a Stable Diffusion model and replace the original .ckpt with the custom Dreambooth .ckpt?


brandonkoch3 commented on July 30, 2024

I've been working on converting ckpt files to Core ML as well. I found a process that "sort of" works but the generated images are still funky.
https://github.com/godly-devotion/mochi-diffusion/wiki/How-to-convert-ckpt-files-to-Core-ML

Great work! This seems to work really well for creating the folder structure that Core ML needs to convert a .ckpt. I'm able to do the conversion successfully, which is awesome. That said, when I take these steps and try to use the StableDiffusion package, I end up with a completely black image. I assume this is the funkiness you're seeing with the generated images?


godly-devotion commented on July 30, 2024

Great work! This seems to work really well for creating the folder structure that Core ML needs to convert a .ckpt. I'm able to do the conversion successfully, which is awesome. That said, when I take these steps and try to use the StableDiffusion package, I end up with a completely black image. I assume this is the funkiness you're seeing with the generated images?

I'm able to generate images and they look similar to how they should, but the image quality is poor. Perhaps using a different scheduler such as Euler might help.
Have you tried turning off the safety checker? I was able to generate images using Mochi Diffusion, which has the safety checker disabled.
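
For anyone testing the diffusers-side output before converting to Core ML, the safety checker can be disabled in the Python pipeline, which rules out the solid-black images it substitutes for flagged outputs (a minimal sketch; the model path is a placeholder):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("/Users/username/output_path/models")
pipe.safety_checker = None  # flagged outputs otherwise come back as black images
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("out.png")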


pedx78 commented on July 30, 2024

Using this script - https://github.com/Sunbread/Ckpt2Diff - I was able to convert my .ckpt to the diffusers format (like what is displayed on the HF site). Then I successfully converted the diffusers model to Core ML format.

However, the image output was blurry and discolored.

I don't know much about the .ckpt file structure and tooling; I'm very new to this.

But I found out that any embeddings or manipulations added to a model change the tensor structure:
huggingface/diffusers#672 (comment)

I am thinking of this flow (first step sketched below):

  • use a tool to discover the model's structure
  • use a tool to resize the model into a format friendly to the coreml converter
  • convert the model to diffusers
  • convert diffusers to Core ML format
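
For the first step, something like this can reveal whether embeddings or merges changed the tensor layout (a minimal sketch; the checkpoint path is a placeholder):

import torch

ckpt = torch.load("/Users/username/folder/model.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # many checkpoints nest weights under "state_dict"
# Print a few parameter names and shapes to compare against a known-good model.
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))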

@brandonkoch3 Your Dreambooth flow looks interesting. I may give it a try over the weekend.


alelordelo commented on July 30, 2024

@pedx78 @brandonkoch3 ,

what if you use the workflow suggested by Apple, and replace the Stable Diffusion model folder with the custom .ckpt file?


brandonkoch3 commented on July 30, 2024

@alelordelo If you want to convert a model from a .ckpt file for use with Core ML, you can do so by following these steps: [...]

Thanks for this great write-up. This is the first I've seen showing the steps for going from something fine-tuned with Dreambooth to usable directly in Python via Core ML. I took these steps (the only modification I made was having to use --compute-unit CPU_AND_GPU, since an error was thrown on my machine if I used ALL), and they seemed to complete successfully via Terminal, but the output image was completely black.

This is the same issue I've been having when converting a Dreambooth-trained model, or a downloaded .ckpt (converted to Core ML using the steps @godly-devotion outlined above), where the model does load but generates abnormal/blank images.


radfaraf commented on July 30, 2024

I tried the converter on two different .ckpt files I found online that work fine in other software, and it gives an error about 20 minutes in. I posted more info here: https://huggingface.co/Guernika/CoreMLStableDiffusion/discussions/2


tomy128 commented on July 30, 2024

The same occurs for me!
After running python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker -o ./models, I found it automatically downloads models from Hugging Face and puts them into ~/.cache/huggingface/hub/{model_name}, which contains three directories: blobs, refs and snapshots.

Then when I run python -m python_coreml_stable_diffusion.pipeline --prompt xxxx -i models -o output, it unexpectedly starts downloading the model again.

When I instead try python -m python_coreml_stable_diffusion.pipeline --model-version /Users/xxx/.cache/huggingface/hub/{model_name} --prompt xxxx -i models -o output to load from local, it throws an error like OSError: Error no file named model_index.json in xxx. When I change --model-version to the snapshots directory in ~/.cache/huggingface/hub/{model_name}, it says OSError: Error no file named diffusion_pytorch_model.bin in xxx.

This is strange! After some research, I found that all files downloaded from Hugging Face are stored in the blobs directory under hashed names, and the files in snapshots are just symlinks to them, so they don't contain any actual *.bin files.

Here are my questions:

  1. How do I avoid downloading twice?
  2. How do I load from local when doing inference?


Wauplin commented on July 30, 2024

Hi @tomy128 I'm not familiar with the ml-stable-diffusion project but if you are wondering how the HF cache is working, here is a piece of documentation you can read: https://huggingface.co/docs/huggingface_hub/guides/manage-cache. If you want to properly download models to the cache, you can use the huggingface-cli download command. Hope this will prove useful 🤗

(disclaimer: I'm a maintainer of huggingface_hub)
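
Building on that, one way to download a model once and reuse the resolved local path for both torch2coreml and the pipeline script is huggingface_hub's snapshot_download (a sketch; the repo id is an example):

from huggingface_hub import snapshot_download

# Downloads into the shared HF cache (or reuses it if already there) and
# returns the local snapshot directory, which can be passed as --model-version.
local_dir = snapshot_download("runwayml/stable-diffusion-v1-5")
print(local_dir)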

