Importing Stable Diffusion Models

Learn how to import Stable Diffusion models into Sogni.

How To Import

To import any Stable Diffusion model that has already been converted to Core ML, whether downloaded from the internet or trained by you, go to File > Import Stable Diffusion (Core ML) Model

Settings for Stable Diffusion Model

When importing your model into Sogni you'll need to assign settings for your model.

  • Model Name: This is the name that will be displayed in the Stable Diffusion Model list, which is located in the Advanced module at the bottom of the Controls Bar.

  • Output Size: Select the correct output size of the model. Currently, the following output sizes are supported: 512x512, 768x768, and XL 1024x1024.

  • Attention: The attention version of the model dictates the compatible processing units, so ensure you select the appropriate version. If the model you're importing is available in both the Original and Split-Einsum versions, it's advisable to import both. Use the Original version with 'CPU, GPU' processing.

  • Quantized: Enable this option if the model has been quantized (8-bit, 6-bit, etc.).

  • Model Folder: Select the location of the model's uncompressed folder. A copy of the folder's contents will be imported. You can discard the downloaded model file after importing.


How to Delete an Imported Model

To delete a previously imported model, locate it in the Model Explorer, then right-click or Command-click and select "Delete Model".


How to Convert Stable Diffusion Model SafeTensors or CKPT Files to Core ML

Sogni utilizes Apple's Core ML to efficiently run Stable Diffusion models. To obtain the necessary bundle of compiled .mlmodelc and other files, the original Stable Diffusion model (CKPT or SafeTensors) must first be converted to Diffusers. Subsequently, the Diffusers format is transformed into .mlpackage files, from which the final compiled bundle is created.

This guide is broken into the following sections:

  • Requirements

  • Conversion: SafeTensors / CKPT → Diffusers

  • Conversion: Diffusers → CoreML

  • Import CoreML model into Sogni

  • Troubleshooting

Open Terminal and run all of the following commands:

Requirements

  1. Install Homebrew and follow the "Next steps" instructions in Terminal

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Install Wget

brew install wget

  3. Install Xcode from the Mac App Store.

  4. Select Xcode as the active Command Line Tools provider:

sudo xcode-select -s /Applications/Xcode.app
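To confirm the switch, you can optionally check which developer directory is active and that the Core ML compiler is reachable (both utilities ship with Xcode). If the second command fails, see the Troubleshooting section at the end of this guide.

xcode-select -p    # should print /Applications/Xcode.app/Contents/Developer
xcrun --find coremlcompiler    # should print the path to the coremlcompiler tool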

  5. Install Miniconda.
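One way to install it, if you'd rather not use the graphical installer from the Miniconda website, is the Homebrew cask (a suggestion, not the only route; the default zsh shell is assumed):

brew install --cask miniconda    # installs Miniconda via Homebrew
conda init zsh    # set up the shell so the conda command is available

Open a new Terminal window afterwards so the shell picks up the conda setup.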

  6. Clone Apple's ml-stable-diffusion repository to your Mac:

git clone https://github.com/apple/ml-stable-diffusion.git
  7. Start a new Python environment and install other requirements:

conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
cd ml-stable-diffusion
pip install -e .
pip install omegaconf
pip install safetensors
  8. Create a folder called "models" on your Desktop. Place the model's .safetensors or .ckpt file and the convert_original_stable_diffusion_to_diffusers.py conversion script (from the Hugging Face Diffusers repository) inside the new folder.
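For example, you can create the folder and fetch the script from Terminal with wget. The URL below assumes the script still lives under scripts/ in the huggingface/diffusers repository; adjust it if the repository layout has changed:

mkdir -p ~/Desktop/models    # create the working folder
cd ~/Desktop/models
wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_original_stable_diffusion_to_diffusers.py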

  9. Run the following commands to ensure you have the correct versions of these packages:

pip install torch==2.1.0
pip install scikit-learn==1.1.2
pip install diffusers==0.23.1
pip install huggingface-hub==0.19.4
pip install transformers==4.34.1
  10. Your system should now be ready to convert Stable Diffusion models!
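As an optional sanity check that the pinned versions were picked up, you can print them from the active environment (this should report 2.1.0, 0.23.1 and 4.34.1 respectively):

python -c "import torch, diffusers, transformers; print(torch.__version__, diffusers.__version__, transformers.__version__)"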


Conversion: SafeTensors / CKPT → Diffusers

  1. Activate the Conda environment.

conda activate coreml_stable_diffusion

  2. Navigate to the "models" folder on your Desktop:

cd ~/Desktop/models

  3. Run the conversion script, replacing <MODEL-NAME> with your model's file name. SafeTensors:

python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.safetensors --from_safetensors --device cpu --extract_ema --dump_path <MODEL-NAME>_diffusers

CKPT:

python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.ckpt --device cpu --extract_ema --dump_path <MODEL-NAME>_diffusers

For SDXL models, add the --pipeline_class_name StableDiffusionXLPipeline flag to the conversion command you are using. (This process takes about 1 minute.)
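For example, the SafeTensors command above with the SDXL flag appended would look like this:

python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path <MODEL-NAME>.safetensors --from_safetensors --device cpu --extract_ema --dump_path <MODEL-NAME>_diffusers --pipeline_class_name StableDiffusionXLPipeline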


Conversion: Diffusers → Core ML

Run the conversion script, replacing <MODEL-NAME> with your model's file name.

  1. ORIGINAL attention-implementation (compatible with CPU, GPU only)

python -m python_coreml_stable_diffusion.torch2coreml --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original && python -m python_coreml_stable_diffusion.torch2coreml --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original
  2. SPLIT_EINSUM attention-implementation (compatible with all processing units)

python -m python_coreml_stable_diffusion.torch2coreml --compute-unit ALL --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation SPLIT_EINSUM -o split-einsum/<MODEL-NAME>_split-einsum && python -m python_coreml_stable_diffusion.torch2coreml --compute-unit ALL --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation SPLIT_EINSUM -o split-einsum/<MODEL-NAME>_split-einsum

For SDXL models:

  1. ORIGINAL attention-implementation (compatible with CPU, GPU only)

python -m python_coreml_stable_diffusion.torch2coreml --xl-version --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original

Weight Compression: --quantize-nbits 8 or --quantize-nbits 6. Example:

python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit

Custom Size: Modifying the output image size of the model is only possible with the ORIGINAL implementation. To do so, use the following flags: --latent-w <SIZE> --latent-h <SIZE>. Note: the size must be a multiple of 64, and you specify it divided by 8 (e.g. for 768x768 use --latent-w 96 --latent-h 96). Example:

python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --latent-w 96 --latent-h 96 --compute-unit CPU_AND_GPU --convert-vae-decoder --convert-vae-encoder --convert-unet --unet-support-controlnet --convert-text-encoder --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit_768x768 && python -m python_coreml_stable_diffusion.torch2coreml --xl-version --quantize-nbits 8 --latent-w 96 --latent-h 96 --compute-unit CPU_AND_GPU --convert-unet --model-version <MODEL-NAME>_diffusers --bundle-resources-for-swift-cli --attention-implementation ORIGINAL -o original/<MODEL-NAME>_original_8bit_768x768

(This process can take 30+ minutes depending on model size and quantization settings.)


Import CoreML model into Sogni

  1. The compiled model folder will be created with the name "Resources" inside the "models" folder on your Desktop, under the original/ or split-einsum/ folder. Give the "Resources" folder a more descriptive name (see the example commands after this list).

  2. You can discard the .mlpackage files from the original/ or split-einsum/ folders.

  3. You can also discard the _diffusers folder if you are not going to perform other conversions from this model.

  4. Open Sogni, and go to File > Import Stable Diffusion (CoreML) model. Select the folder with the compiled resources to import the model.

  5. Select your new model from the Stable Diffusion models list in the "Advanced" tab of the Controls Bar.
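For reference, steps 1-3 could look like the following in Terminal. This is only a sketch, assuming the ORIGINAL output paths used in the commands above and that the compiled Resources folder sits inside the -o output directory; adjust the paths to match your actual folder names:

cd ~/Desktop/models
mv original/<MODEL-NAME>_original/Resources original/<MODEL-NAME>_original/<MODEL-NAME>_CoreML    # step 1: give the compiled folder a descriptive name
rm -rf original/<MODEL-NAME>_original/*.mlpackage    # step 2: remove the intermediate .mlpackage files
rm -rf <MODEL-NAME>_diffusers    # step 3: only if you won't convert this model again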


Troubleshooting

Miniconda: "This package is incompatible with this version of macOS": after the "Software Licence Agreement" step, click on "Change Install Location..." and select "Install for me only".

Terminal: xcrun: error: unable to find utility "coremlcompiler", not a developer tool or in PATH: open Xcode and go to "Settings..." → "Locations", then click on the "Command Line Tools" drop-down menu and reselect the Command Line Tools version.

ModuleNotFoundError: No module named 'xyz': while the conda coreml_stable_diffusion environment is active, run:

pip install xyz

zsh: killed python: your Mac has run out of memory. Close other memory-hungry applications and retry.


Need help? Ask in our Discord!

For any questions regarding the use of Sogni, tips, suggestions, bugs, sharing art, discussing AI and life, or simply hanging out... Join our Discord community! Join Sogni Discord
