(Cover image: a sumi-e style painting of a kirin pony, generated by Stable Diffusion.)

Running Stable Diffusion on Windows with an AMD GPU


This is a series!🔗

Part one: You're here!
Part two: Stable Diffusion Updates


(Want just the bare tl;dr bones? Go read this Gist by harishanand95. It says everything this does, but for a more experienced audience.)

Stable Diffusion has recently taken the techier (and art-techier) parts of the internet by storm. It's an open-source machine learning model capable of taking in a text prompt, and (with enough effort) generating some genuinely incredible output. See the cover image for this article? That was generated by a version of Stable Diffusion trained on lots and lots of My Little Pony art. The prompt I used for that image was kirin, pony, sumi-e, painting, traditional, ink on canvas, trending on artstation, high quality, art by sesshu.

Unfortunately, in its current state, it relies on Nvidia's CUDA framework, which means that it only works out of the box if you've got an Nvidia GPU.

Fear not, however. Because Stable Diffusion is both a) open source and b) good, it has seen an absolute flurry of activity, and some enterprising folks have done the legwork to make it usable for AMD GPUs, even for Windows users.

Requirements🔗

Before you get started, you'll need the following:

  • A reasonably powerful AMD GPU with at least 6GB of video memory. I'm using an AMD Radeon RX 5700 XT, with 8GB, which is just barely powerful enough to outdo running this on my CPU.
  • A working Python installation. You'll need at least version 3.7; v3.7, v3.8, v3.9, and v3.10 should all work.
  • The fortitude to download around 6 gigabytes of machine learning model data.
  • A Hugging Face account. Go on, go sign up for one, it's free.
  • A working installation of Git, because the Hugging Face login process stores its credentials there, for some reason.

The Process🔗

I'll assume you have little or no experience with Python. My only assumption is that you have it installed, and that when you run python --version and pip --version from a command line, they respond appropriately.

Preparing the workspace🔗

Before you begin, create a new folder somewhere. I named mine stable-diffusion. The name doesn't matter.

Once created, open a command line in your favorite shell (I'm a PowerShell fan myself) and navigate to your new folder. We're going to create a virtual environment to install some packages into.

When there, run the following:

python -m venv ./virtualenv

This will use the venv package to create a virtual environment named virtualenv. Now, you need to activate it. Run the following:

# For PowerShell
./virtualenv/Scripts/Activate.ps1
rem For cmd.exe
virtualenv\Scripts\activate.bat

Now, anything you install via pip or run via python will only be installed or run in the context of this environment we've named virtualenv. If you want to leave it, you can just run deactivate at any time.

Okay. All set up, let's start installing the things we need.

Installing Dependencies🔗

We need a few Python packages, so we'll use pip to install them into the virtual environment, like so:

pip install diffusers==0.3.0
pip install transformers
pip install onnxruntime

Now, we need to go and download a build of Microsoft's DirectML Onnx runtime. Unfortunately, at the time of writing, none of their stable packages are up-to-date enough to do what we need. So instead, we need to either a) compile from source or b) use one of their precompiled nightly packages.

Because the toolchain to build the runtime is a bit more involved than this guide assumes, we'll go with option b). Head over to https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/PyPI/ort-nightly-directml/overview/1.13.0.dev20220908001 (Or, if you're the suspicious sort, you could go to https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly and grab the latest under ort-nightly-directml yourself).

Either way, download the package that corresponds to your installed Python version: ort_nightly_directml-1.13.0.dev20220913011-cp37-cp37m-win_amd64.whl for Python 3.7, ort_nightly_directml-1.13.0.dev20220913011-cp38-cp38-win_amd64.whl for Python 3.8, you get the idea.

Once it's downloaded, use pip to install it.

pip install pathToYourDownloadedFile/ort_nightly_whatever_version_you_got.whl --force-reinstall

Take note of that --force-reinstall flag! The package will override some previously-installed dependencies, but if you don't allow it to do so, things won't work further down the line. Ask me how I know >.>

Getting and Converting the Stable Diffusion Model🔗

First thing, we're going to download a little utility script that will automatically download the Stable Diffusion model, convert it to Onnx format, and put it somewhere useful. Go ahead and download https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py (i.e. copy the contents, place them into a text file, and save it as convert_stable_diffusion_checkpoint_to_onnx.py) and place it next to your virtualenv folder.

Now is when that Hugging Face account comes into play. The Stable Diffusion model is hosted here, and you need an API key to download it. Once you sign up, you can find your API key by going to the website, clicking on your profile picture at the top right -> Settings -> Access Tokens.

Once you have your token, authenticate your shell with it by running the following:

huggingface-cli.exe login

And paste in your token when prompted.

Note: If you get an error with a stack trace that looks something like this at the bottom:

  File "C:\Python310\lib\subprocess.py", line 1438, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified

...then that probably means that you don't have Git installed. The huggingface-cli tool uses Git to store login credentials.
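Not sure whether you have Git? A quick sanity check from any shell:

git --version

If that prints a version number, you're set; if it errors out, go install Git and try the login again.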

Once that's done, we can run the utility script.

python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"

--model_path is the path on Hugging Face to go and find the model. --output_path is the path on your local filesystem to place the now-Onnx'ed model into.

Sit back and relax--this is where that 6GB download comes into play. Depending on your connection speed, this may take some time.

...done? Good. Now, you should have a folder named stable_diffusion_onnx which contains an Onnx-ified version of the Stable Diffusion model.

Your folder structure should now look something like this:

A picture of Windows Explorer displaying two folders and two files. (I named my virtual environment venv instead of virtualenv. Same same though.)

Almost there.

Running Stable Diffusion🔗

Now, you just have to write a tiny bit of Python code. Let's create a new file, and call it text2img.py. Inside of it, write the following:

from diffusers import StableDiffusionOnnxPipeline
pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")

prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd'"

image = pipe(prompt).images[0] 
image.save("output.png")

Take note of the first argument we pass to StableDiffusionOnnxPipeline.from_pretrained(): "./stable_diffusion_onnx". That's the file path to the Onnx-ified model we just created. And provider needs to be "DmlExecutionProvider" in order to actually instruct Stable Diffusion to use DirectML instead of the CPU.

Once that's saved, you can run it with python .\text2img.py.

Once it's done, you'll have an image named output.png that's hopefully close to what you asked for in prompt!

A Stable Diffusion generated picture of a robot reclining on a mountainside.

Bells and Whistles🔗

Now, that was a little bit bare-minimum, particularly if you want to customize more than just your prompt. I've written a small script with a bit more customization, and a few notes to myself that I imagine some folks might find helpful. It looks like this:

from diffusers import StableDiffusionOnnxPipeline
import numpy as np

def get_latents_from_seed(seed: int, width: int, height:int) -> np.ndarray:
    # 1 is batch size
    latents_shape = (1, 4, height // 8, width // 8)
    # Gotta use numpy instead of torch, because torch's randn() doesn't support DML
    rng = np.random.default_rng(seed)
    image_latents = rng.standard_normal(latents_shape).astype(np.float32)
    return image_latents

pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
"""
prompt: Union[str, List[str]],
height: Optional[int] = 512,
width: Optional[int] = 512,
num_inference_steps: Optional[int] = 50,
guidance_scale: Optional[float] = 7.5, # This is also sometimes called the CFG value
eta: Optional[float] = 0.0,
latents: Optional[np.ndarray] = None,
output_type: Optional[str] = "pil",
"""

seed = 50033
# Generate our own latents so that we can provide a seed.
latents = get_latents_from_seed(seed, 512, 512)
prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd"
image = pipe(prompt, num_inference_steps=25, guidance_scale=13, latents=latents).images[0]
image.save("output.png")

With this script, I can pass in an arbitrary seed value, easily customize the height and width, and in the triple-quote comments, I've added some notes about what arguments the pipe() function takes. My plan is to wrap all of this up into an argument parser, so that I can just pass all of these parameters into the script without having to modify the source file itself, but I'll do that later.
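For the curious, a minimal sketch of what that argparse version might look like (untested, and the argument names are just my own picks):

import argparse
import numpy as np
from diffusers import StableDiffusionOnnxPipeline

def get_latents_from_seed(seed: int, width: int, height: int) -> np.ndarray:
    # Same latents helper as in the script above
    latents_shape = (1, 4, height // 8, width // 8)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(latents_shape).astype(np.float32)

parser = argparse.ArgumentParser(description="Stable Diffusion text-to-image via Onnx/DirectML")
parser.add_argument("prompt", type=str, help="The text prompt to render")
parser.add_argument("--seed", type=int, default=50033)
parser.add_argument("--steps", type=int, default=25)
parser.add_argument("--scale", type=float, default=7.5)
parser.add_argument("--output", type=str, default="output.png")
args = parser.parse_args()

pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
latents = get_latents_from_seed(args.seed, 512, 512)
image = pipe(args.prompt, num_inference_steps=args.steps, guidance_scale=args.scale, latents=latents).images[0]
image.save(args.output)

Something like that would let you run python .\text2img.py "a robot on a mountaintop" --seed 1234 --steps 30 without touching the source.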

Some Final Notes🔗

  • As far as I can tell, this is still a fair bit slower than running things on Nvidia hardware! I don't have any hard numbers to share, only anecdotal observations that this seems to be anywhere from 3x to 8x slower than it is for people on similar-specced Nvidia hardware.
  • Currently, the Onnx pipeline doesn't support batching, so don't try to pass it multiple prompts, or it will be sad.
  • All of this is changing at breakneck pace, so I fully expect about half of this blog post to be outdated a few weeks from now. Expect to have to do some legwork of your own. Sorry!
  • There is a very good guide on how to use Stable Diffusion on Reddit that goes through the basics of what each of the parameters means, how it affects the output, and gives tips on what you can do to get better outputs.

Closing Thoughts🔗

So hopefully, now you've got your AMD Windows machine generating some AI-powered images. As I said before, I expect much of this information to be out of date two weeks from now. I might try to keep this post updated if I find the time and inclination, but that depends a lot on how this develops, and my own free time. We'll see!

As ever, I can be found on GitHub as pingzing and Twitter as @pingzingy. Happy generating!

Creative Commons BY badge The text of this blog post is licensed under a Creative Commons Attribution 4.0 International License.

Comments

  1. Stephen
    Thu, Sep 15, 2022, 01:07:07
    Thanks for putting this together. Unfortunately I can't seem to get this to run - it seems to hang up on the pipe command. Do you have any suggestions?
    File "C:\stable-diffusion\stable-diffusion\text2img.py", line 12, in
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    RuntimeError: D:\a\_work\1\s\onnxruntime\core\providers\dml\dml_provider_factory.cc(124)\onnxruntime_pybind11_state.pyd!00007FFF3E877BF3: (caller: 00007FFF3E7C9C16) Exception(1) tid(d50) 80070057 The parameter is incorrect.
  2. MK
    Thu, Sep 15, 2022, 01:09:08
    While trying to login using the API token, I get the below error. I'm trying to understand what file is missing and why:

    Traceback (most recent call last):
    File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
    File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
    File "I:\stable-diffusion\virtualenv\Scripts\huggingface-cli.exe\__main__.py", line 7, in
    File "i:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\commands\huggingface_cli.py", line 41, in main
    service.run()
    File "i:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\commands\user.py", line 176, in run
    _login(self._api, token=token)
    File "i:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\commands\user.py", line 344, in _login
    hf_api.set_access_token(token)
    File "i:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\hf_api.py", line 705, in set_access_token
    write_to_credential_store(USERNAME_PLACEHOLDER, access_token)
    File "i:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\hf_api.py", line 528, in write_to_credential_store
    with subprocess.Popen(
    File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
    File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.8_3.8.2800.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 1311, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
    FileNotFoundError: [WinError 2] The system cannot find the file specified
    (virtualenv) PS I:\stable-diffusion>
  3. Neil
    Thu, Sep 15, 2022, 08:27:18
    MK: Judging by the stack trace, when the huggingface-cli tries to call write_to_credential_store(), it can't find ANY credential store. Since it looks like you're using a version of Python installed from the Windows Store... maybe that's why? As a guess, you could try uninstalling that version of Python, and installing a non-Store version.

    Stephen: Yours is trickier. It looks like it's actually dying somewhere in DirectML's native C/C++ code itself, with a classically unhelpful "The parameter is incorrect." =/
    Without more information about your setup, it's hard to say. I can say that I haven't seen that before, though. Are you running on an unusual system, like an ARM version of Windows, or something?
  4. ponut64
    Thu, Sep 15, 2022, 11:48:00
    I got it to work.
    Note: Probably do not use a newer nightly version of DirectML. It may cause huggingface to misbehave. Or something.
    CPU is AMD R5 5600G.
    GPU is AMD RX 6600 (non-XT).
    CPU time for a sample 256x256 image and prompt is 54 seconds.
    GPU time for the same prompt and size is 34 seconds.
    It helps!
    As for the images it's producing, they all seem rather cursed. But such is the way of AI!
  5. Conor
    Thu, Sep 15, 2022, 12:15:37
    Thanks for making the step by step.
    Though I'm afraid it still seems to be throwing an error at me and I don't understand where i'm missing a step.
    The error i get is as follows:

    PS F:\Applications\AI\Stable-Diffusion> huggingface-cli.exe login
    huggingface-cli.exe : The term 'huggingface-cli.exe' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify
    that the path is correct and try again.
    At line:1 char:1
    + huggingface-cli.exe login
    + ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : ObjectNotFound: (huggingface-cli.exe:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

    If anyone can see what I'm doing wrong or has had the same issue I'd love to know!
  6. Neil
    Thu, Sep 15, 2022, 15:59:20
    Conor: The huggingface-cli.exe is something that gets brought in by one of the dependencies (not sure which--probably diffusers?). If you were in the virtual environment when you installed everything, it should have wound up in .\virtualenv\Scripts\.

    My guess would be that you didn't activate the virtual environment with .\virtualenv\Scripts\Activate.ps1. If you HAVE, you can probably just invoke it manually with .\virtualenv\Scripts\huggingface-cli.exe.
  7. A good man
    Thu, Sep 15, 2022, 16:24:45
    Thanks for putting this together. A few errors and issues I had:
    -you need GIT or you won't be able to log into huggingface. You don't mention this I believe
    - '---force-reinstall' has an extra -. Needs removing or command won't be recognized
    -downloading the utility script may be confusing for newbies (was for me). Guessing what you meant by 'downloading' is selecting it all and copy-pasting into a notepad file and then changing to .py format. At least that's what worked for me
  8. Conor
    Thu, Sep 15, 2022, 18:08:57
    I realized I had a previous install of Python installed through windows store, and its directory was not added to PATH.
    Completely uninstalling Python, then reinstalling using the installer from the website, and checking the box to add to PATH during the install fixed my problem.
  9. Andrew
    Fri, Sep 16, 2022, 01:23:52
    When I run your first example python script to generate an image, it takes about 7-8 minutes. This is not as fast as I had hoped for an AMD RX 6600 considering ponut64 could do 256x256 images in less than a minute. How much is DirectML dependent on the CPU? That may be my bottleneck (Ivy Bridge processor 3570K, 16GB ram). When I modify the script to not use GPU by deleting the provider argument, it takes nearly 30 minutes to generate an image, so a relative improvement, but still long enough to try my patience.

    I cannot try generating a different image size (e.g. 256x256) since I get the following error:

    ValueError: Unexpected latents shape, got (1, 4, 32, 32), expected (1, 4, 64, 64)
  10. ponut64
    Fri, Sep 16, 2022, 02:52:10
    To andrew:
    Try this line
    image = pipe(prompt, height=320, width=320, guidance_scale=8, num_inference_steps=25).images[0]
    You can adjust the height, steps, and such from here, and many other parameters that not even I know about.
  11. Neil
    Fri, Sep 16, 2022, 07:09:17
    -A good man-
    - Huh, I didn't even realize the CLI used Git as its credential storage medium. Whack. Thanks for the heads-up.
    - Typo fixed, thanks.
    - Added a bit of clarification around downloading the script, good point.

    -Andrew-
    If you're using the second Python script in the post, like ponut64 pointed out, you need to pass height and width arguments to pipe() as well as get_latents_from_seed(). (The only thing get_latents_from_seed() does is generate the randomness the image generation process uses, which for Reasons needs to know what the dimensions of the output will be.)
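    In other words, something like this (a quick sketch, using the helper from the post, for a 256x256 output):

    latents = get_latents_from_seed(seed, 256, 256)
    image = pipe(prompt, height=256, width=256, num_inference_steps=25, latents=latents).images[0]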

    I'll probably put up a small follow-up post soon with an updated script and a few observations about the process. I've got a nicely cleaned-up script that just reads in arguments now, and is a bit neater.
  12. Rev Hellfire
    Fri, Sep 16, 2022, 13:27:56
    Thanks a million for putting this together. Worked first time for me, that's a first :-).

    One small typo though, "---force-reinstall" should be "--force-reinstall"
  13. A good man
    Fri, Sep 16, 2022, 14:13:49
    Been playing with this for a while and very often I get plain black image outputs. I read it's due to a NSFW filter. Any clue how to disable it? Tried a few tricks from different places and none of them work.
    This is very annoying as often I don't even try anything explicitly NSFW. It just happens at random when you go by different art styles.
  14. Neil
    Fri, Sep 16, 2022, 14:16:37
    -Rev Hellfire-
    Thanks for the heads-up! Fixed!

    -A good man-
    Yep, that's the safety checker. I'm going to address that in the follow-up post (because for some reason, it also seems to slow things down greatly), but the easiest way to disable it is, after defining pipe, add the following line:

    pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))

    That will replace pipe's safety checker with a dummy function that always returns "false" when it checks to see if the generated output would be considered NSFW.
  15. Eric
    Fri, Sep 16, 2022, 14:59:14
    Is there a way to change the sampling method? I'd like to use the ddim sampler as it seems to give good results with fewer steps.
  16. Alex
    Fri, Sep 16, 2022, 16:13:34
    Works for me on RX5700, thanks!
  17. Marz
    Fri, Sep 16, 2022, 17:03:37
    Hello! Thank you for this!
    I'm just having an issue I'm not quite sure how to fix.

    When running..
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"

    I get this error..
    C:\Users\tiny\AppData\Local\Programs\Python\Python310\python.exe: can't open file 'E:\\Desktop\\Stable Diffusion\\convert_stable_diffusion_checkpoint_to_onnx.py': [Errno 2] No such file or directory

    I've followed the steps word for word so I'm not too sure where I'm messing up. Any help would be great, thank you!
  18. ponut64
    Fri, Sep 16, 2022, 17:57:19
    Marz,

    You need to manually specify a directory on your system for it to put stable-diffusion.
    It must be a full directory name, for example, D:\Library\stable-diffusion\stable_diffusion_onnx
  19. Marz
    Fri, Sep 16, 2022, 18:19:03
    Hey ponut64, thanks for the reply :)

    I did do that, no matter what I get the same error. I do have it set up properly (as far as I know), just didn't copy the exact prompt before. Here's what I have

    (virtualenv) PS E:\Desktop\Stable Diffusion> convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="E:\Desktop\Stable Diffusion"
    convert_stable_diffusion_checkpoint_to_onnx.py: The term 'convert_stable_diffusion_checkpoint_to_onnx.py' is not recognized as a name of a cmdlet, function, script file, or executable program.
    Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
  20. Marz
    Fri, Sep 16, 2022, 18:20:27
    UPDATE: I know that prompt is also wrong (I've been trying multiple things), I did fix it with "python" in front and it still shows the same error.
  21. Neil
    Fri, Sep 16, 2022, 18:26:06
    -Eric-
    I've played with trying to use the other schedulers, but haven't had any success yet. They usually die somewhere in the middle with arcane errors I don't know enough to debug.

    -Marz-
    It looks like the 'convert_stable_diffusion_checkpoint_to_onnx.py' script isn't in the 'E:\Desktop\Stable Diffusion' folder, judging by that error message. Try moving it into there, then running the command again?
  22. Eric
    Fri, Sep 16, 2022, 18:31:14
    How do you change the scheduler? I'd like to take a crack at figuring it out.
    For that matter, is there a complete list of arguments I can put into "pipe()" somewhere?
  23. Neil
    Fri, Sep 16, 2022, 18:36:18
    -Eric-
    The closest thing I've found to a comprehensive list of arguments taken by pipe() is the source code itself: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py#L45

    As for using a different scheduler, 'scheduler' is an arg that can be passed to .from_pretrained(), and the diffusers repo has a few examples here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/stable_diffusion though I haven't had any luck with those, or tweaks thereof.
  24. Tony Topper
    Fri, Sep 16, 2022, 23:03:25
    Thanks for doing this. It's been interesting to play with. My first generation clocked in at around 48 seconds on a 6900xt and an AMD 3950x. I am exploring the possibility of using AI to create assets for video game productions. Speeding it up would be awesome. Any thoughts on how to achieve that?

    Also, both DALL-E 2 and Nitecafe offer generating multiple images at the same time. I would love to get this running with that feature.

    I installed the nightly via pip install using the info found here: https://aiinfra.visualstudio.com/PublicPackages/_artifacts/feed/ORT-Nightly/connect/pip Though I had to remove that config line from my pip.ini file after installing it or no other pip installs would work.

    Also, FWIW, I got an error about "memory pattern" being disabled when I run the Python script you supplied.

    Would love to keep up to date with how you improve the Python script.

    (P.S. I've also been getting black squares as the output on occasion. Wonder what that is.)
  25. Marz
    Fri, Sep 16, 2022, 23:24:05
    Thanks for the reply Neil :)
    It's most definitely there, that's why I'm stumped. Just gonna save my GPU the trouble and use the browser. Thank you though and take care!
  26. Luxion
    Sat, Sep 17, 2022, 02:43:25
    Excellent guide!
    You forgot to mention that you need to activate the environment before executing the script each time the console is closed - it's obvious but maybe not for noobs.
    Now we just need the guys at diffusers to work on onnx img2img and inpainting pipelines.
  27. Eric
    Sat, Sep 17, 2022, 02:46:32
    -Tony Topper-
    The black squares output is because of the NSFW filter. From another comment above:
    "
    pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))
    That will replace pipe's safety checker with a dummy function that always returns "false" when it checks to see if the generated output would be considered NSFW.
    "
    I'm also getting that memory pattern error, but it doesn't seem to affect anything?

    -Neil-
    I'm working on the scheduler problem. As near as I can figure, it's something related to an incompatibility with numpy and torch. Torch isn't getting the right type of float from numpy and when I try to cast it, it still doesn't work. That's where I'm investigating lately at least.
  28. ponut64
    Sat, Sep 17, 2022, 05:00:32
    For fixing the schedulers, I found a hint while getting help for another diffuser.
    From here:
    https://huggingface.co/hakurei/waifu-diffusion/discussions/4

    "it can be fixed by setting dtype=np.int64 in pipeline_stable_diffusion_onnx.py line 133:

    noise_pred = self.unet(
    sample=latent_model_input, timestep=np.array([t], dtype=np.int64), encoder_hidden_states=text_embeddings
    )
    "

    And specifically about the scheduler:

    "I see astype(np.int64) in scheduling_pndm.py line 168 but not in other schedulers that's why change to PNDMScheduler can fix it."

    So that guy knows what he's talking about, I don't.
  29. Simon
    Sat, Sep 17, 2022, 15:20:07
    Excellent guide! Finally got txt2img working thanks to your post. Thanks!
  30. Gianluca
    Sat, Sep 17, 2022, 19:13:24
    Thank you very much for this effective guide!
    I was able to use just the CPU so far and now I can finally try with the GPU, instead.
    It seems that the GPU is faster in my case.

    I have the following system configuration:
    - OS: Windows 10 Pro
    - MB: ASRock X570 Phantom Gaming 4
    - CPU: AMD Ryzen 7 3700X 8-Core 3600MHz
    - RAM: 32 GB (17 GB available)
    - GPU: MSI AMD Radeon RX 570 ARMOR 8 GB

    CPU average time for 512 x 512 was about 5~6 minutes.
    With GPU and the simple script above it was reduced to 4 minutes (just 1 run).
    And with your more optimized script here above my first run says 2 minutes and 10 seconds.

    With CPU it was using about 16GB of the available RAM and CPU usage was about 50%.
    With GPU it is using all of the 8GB available and 100% of GPU processing power.
  31. Dan
    Sat, Sep 17, 2022, 20:24:27
    Excellent Thank you!
    Had a bit of difficulty putting my Token on the command line. Not sure why Ctrl-V wouldn't work, but a single right click and enter worked.
  32. Luxion
    Sat, Sep 17, 2022, 23:23:28
    @Gianluca
    His second script is not 'more optimized', it simply makes some variables which are then used internally to adjust the settings. The reason you are generating twice as fast is that in his second script he set the number of steps to 25 - which defaults to 50 when not specified.
    I'm assuming you don't know what steps are nor their importance because you didn't worry about them at all before running the script. In which case I recommend you read/watch some SD tutorials and learn more.
  33. Gianluca
    Sun, Sep 18, 2022, 08:39:04
    Thanks for the advice! I am definitely new to Stable Diffusion and some more tutorials will help :)

    Of course you are right, I noticed the difference in steps just after I posted my comment here (but I could not edit), and I was a little bit sad to acknowledge that my graphics card is not so good in the end :)
    Basically for me it works as in your own experience: just slightly better than with CPU alone, perhaps just 0.3~0.5 s/it faster.
  34. ekkko
    Sun, Sep 18, 2022, 15:50:36
    I can't seem to paste or type in the token once the prompt appears in either shell. Ideas?
  35. Luxion
    Sun, Sep 18, 2022, 16:02:53
    Check out the diffusers repo, someone made a PR for ONNX img2img and inpaint pipelines!
    I still see some torch calls in there - not sure it really works but if it does I hope you could update this guide and teach us how to use them when they get merged.
    Thanks again for the guide! Its brilliant!


    @Gianluca
    I have the MSI RX560 4G so I know how you feel.
    But there's lots of good news:
    - GPU's pricing should drop a little in the near future
    - ONNX is still improving and its SD models and pipelines will become much better in the very near future
    - AMD is apparently working with StabilityAI to help compatibility issues
    - And finally within 2 years SD will become so optimized that it will be able to run on mobile - according to Emad - SD's creator!
  36. ekkko
    Sun, Sep 18, 2022, 16:02:55
    It finally worked via right-clicking the shell window and selecting edit>paste. Seems quite a few people have this problem - found the solution at: https://discuss.huggingface.co/t/how-to-login-to-huggingface-hub-with-access-token/22498
  37. Dan
    Sun, Sep 18, 2022, 20:44:18
    Is there a way to have the model run the same prompt more than once? I tried the technique from hugging face where it does 3 variations and puts them in a grid with a specified amount of rows and columns, but I don't have enough vram to run 3 variations at the same time. I think I'm more interested in running the process back to back for the same prompt.
  38. ponut64
    Mon, Sep 19, 2022, 00:03:10
    To Dan,

    Yes, you can easily do that with a Python "while" loop (which, here, is being used like a 'for' loop would in other languages).

    Here is an example (note the TABS!):

    num_images = 0
    while (num_images < 10):
        num_images = num_images + 1
        image = pipe(prompt, height=448, width=320, guidance_scale=12, num_inference_steps=60).images[0]
        image.save("output" + str(num_images) + ".png")

    The count of images you want out of the program is the number that "num_images" is to be less than.
  39. Robin
    Mon, Sep 19, 2022, 11:14:28
    Thank you so much for putting this together! I was able to follow the steps and got it up and running. Is there any way I can use a GUI with it? The text prompts are working, would just be amazing to have something like stable-diffusion-ui cooperating with this method :-) Appreciate all your help! Take care and best wishes
  40. Mads
    Mon, Sep 19, 2022, 16:12:59
    Thank you so much. It is working just fine.
    A question: How do I convert a custom ckpt model instead of the one provided by hugging face?
  41. Bellic
    Mon, Sep 19, 2022, 18:29:54
    Wow that is great! text2img is really doing well. But I wonder how I can use image to image. The default isn't onnx but some nvidia standard.
    Again. Thank you so much.
  42. Allan
    Tue, Sep 20, 2022, 00:20:49
    Thank you for providing this information! Much gratitude.
  43. Magnus
    Tue, Sep 20, 2022, 09:09:16
    Hi, great guide i got it working on my windows 10 machine. 2 questions:

    Can you do reinforcement learning on windows 10 too? referring to the pony art, was that model trained using this setup or downloaded from elsewhere? i'm guessing the latter...

    I also notice the generation uses all of my vram (8GB) - does that mean it has to use some of the regular ram, slowing the process down?
    i will try to generate in smaller resolution but are there other ways of reducing vram usage?

    Thanks
  44. anonymous
    Tue, Sep 20, 2022, 09:46:29
    I'm getting images output, however with this error:
    ...virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:54: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'

    If anybody knows what this issue is, how to resolve it, or if you can tell me how I can go about debugging this, then I would be very grateful. I've been trying to print a list of available providers as my debug starting point to no avail.
  45. michael
    Tue, Sep 20, 2022, 10:44:00
    no, no it still doesn't work. need more steps.
    please explain in more detail the convert python part.
  46. hugh
    Tue, Sep 20, 2022, 10:44:04
    @ponut64

    Got any ideas on iterating the seed in your loop? Omitting latents=latents from pipe yields irreproducible results.

    I tried this:

    num_images = 0
    seed = 50033
    while (num_images < 25):
        num_images = num_images + 1
        seed = seed + 1
        image = pipe(prompt, height=512, width=512, guidance_scale=13, num_inference_steps=25).images[0]
        image.save("output"+ str(num_images) + "_seed-" + str(seed) + ".png")

    and received 25 different images, but running again without changes gives 25 new images.

    Replacing with:
    image = pipe(prompt, num_inference_steps=13, guidance_scale=13, latents=latents).images[0]

    gives reproducible results, but the seed is unchanging throughout.
  47. michael
    Tue, Sep 20, 2022, 10:57:45
    nevermind it works. now i just wish i can make the images bigger?
  48. Neil
    Tue, Sep 20, 2022, 11:40:04
    -Robin-
    Don't have a good out-of-the-box answer for you, but I know a few people have had some success using Gradio to throw together a rudimentary UI. Might be something you could experiment with.
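    As a very rough, untested sketch of the idea (this just uses Gradio's basic Interface API; you'd need to pip install gradio first):

    import gradio as gr
    from diffusers import StableDiffusionOnnxPipeline

    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")

    def generate(prompt):
        # pipe() returns PIL images, which Gradio's "image" output accepts directly
        return pipe(prompt).images[0]

    gr.Interface(fn=generate, inputs="text", outputs="image").launch()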

    -Mads-
    Not sure! I imagine that the convert_stable_diffusion_checkpoint_to_onnx.py script has some clues. I haven't taken a close look at it myself, but you might be able to repurpose whatever it does to point at a local CKPT.

    -Magnus-
    Not sure! I'm more a casual user than an ML guru. The pony was generated using someone else's custom-built model based on Stable Diffusion that they haven't released yet, not generated locally.
    As to reducing VRAM usage, not as far as I know--I think SD uses everything that's available. There's probably some way to tune it, but I don't know what that might be.

    -anonymous-
    That indicates that SD can't find everything it needs to run DirectML, so it's falling back to executing on the CPU, and you're not getting any advantage from running on your GPU.
    When you installed the nightly Onnx Runtime package, did you make sure to pass it the --force-reinstall flag? I noticed I had similar failures until I did so.
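    A quick way to check what Onnx Runtime actually sees (from a Python prompt inside the activated virtual environment):

    import onnxruntime as ort
    print(ort.get_available_providers())  # should include 'DmlExecutionProvider'

    If that only lists 'CPUExecutionProvider', the DirectML nightly isn't the package that's actually installed.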
  49. Neil
    Tue, Sep 20, 2022, 11:49:50
    -hugh-
    "...yields irreproducible results..."
    Of course, you're not actually using the seed in your example! The latents are the source of randomness in the pipeline, and if you don't pass in your own, you give the pipeline free rein to generate them for you, which it does randomly. If you want deterministic results, you need to generate your own latents, using a seed you control.
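    Roughly like this (a sketch, reusing the get_latents_from_seed() helper from the post):

    num_images = 0
    seed = 50033
    while num_images < 25:
        num_images = num_images + 1
        seed = seed + 1
        # Fresh latents from the current seed, so every image is reproducible
        latents = get_latents_from_seed(seed, 512, 512)
        image = pipe(prompt, num_inference_steps=25, guidance_scale=13, latents=latents).images[0]
        image.save("output" + str(num_images) + "_seed-" + str(seed) + ".png")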
  50. matasoy
    Tue, Sep 20, 2022, 13:45:09
    Thank you, this helps me a lot. Especially the GIT warning :D
  51. michael
    Tue, Sep 20, 2022, 14:16:38
    bro, so i have to do everything all over if for some reason the power went off and im no longer in that shell
  52. please.
    Tue, Sep 20, 2022, 14:38:22
    + python .\text2img.py
    + ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo : NotSpecified: ( File "C:\jissis\text2img.py", line 8:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

    num_images = num_images + 1
    ^
    IndentationError: expected an indented block after 'while' statement on line 7

    i want to roll on more images. i don't know what to change anymore
  53. please, btw
    Tue, Sep 20, 2022, 14:40:16
    num_images = 0;
    while (num_images < 10):
    num_images = num_images + 1
    image = pipe(prompt, height=448, width=320, guidance_scale=12, num_inference_steps=60).images[0]
    image.save("output" + str(num_images) + ".png")

    what here needs to be changed so it generates FIVE images? ._.
  54. Luxion
    Tue, Sep 20, 2022, 15:29:18
    @please, btw
    >what here needs to be changed so it generates FIVE images? ._.
    I would advise you try to understand the code before anything else. Its fairly easy even if you know nothing of python or coding in general.
    Take a look, you first create a variable and named it "num_images" and said that it is a number and the number is 0.
    Then you created a loop that will repeat all of the code inside of it for as long as its condition is True, and the condition is "num_images < 10". Then right at the start of the loop, you have added 1 to "num_images" and you do so on every loop.
    Since "num_images" increases by 1 every time it loops and the loop itself will terminate at the end of the coding sequence when it reaches 10, guess how many times it will repeat itself and also guess what do you need to change in order to loop for a specific number of times of your choice...
    Be sure to add an extra tab on each line of code inside the loop otherwise python wont recognize that code as being inside of the loop. If necessary, check online for examples of 'while loops in python'.
  55. Josh
    Thu, Sep 22, 2022, 02:18:53
    Hey I just set this up and on the more advanced script you wrote I have been trying to change the resolution to 1024x1024 instead, but I keep getting an error for line 28 and something on line 100 of the pipeline_stable_diffusion_onnx.py where it says " stable_diffusion\pipeline_stable_diffusion_onnx.py, line 100, in __call__
    raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
    ValueError: Unexpected latents shape, got (1, 4, 128, 128), expected (1, 4, 64, 64) "
  56. Neil
    Thu, Sep 22, 2022, 09:25:51
    -Josh-
    The current state of the Onnx pipeline doesn't support sizes other than 512x512.
  57. Someone
    Fri, Sep 23, 2022, 05:30:04
    Hi Neil,
    Thanks for this wonderful tutorial. I was able to make it work but it is not using the GPU atm.
    I am getting following warning when running text2img.py:
    C:\stbdiff\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:54: UserWarning: Specified provider 'DmlExecutionProvider' is not in available provider names.Available providers: 'CPUExecutionProvider'

    Seems like GPU execution is not available. Any idea how to fix this one?

    Thanks again.
  58. Gato Guff
    Sat, Sep 24, 2022, 18:59:33
    You're a genius and I love you.
  59. Leon
    Sun, Sep 25, 2022, 01:16:01
    I tried to run the "huggingface-cli.exe login" command and then when I pasted the API key it did not allow me to go through. Then, after trying a couple of times, now it just says "Access Denied."
    I don't know what I did wrong to be honest.
  60. Chris
    Sun, Sep 25, 2022, 10:51:24
    When trying to run the script - convert_stable_diffusion_checkpoint_to_onnx.py - I get the following error :

    (virtualenv) (base) PS F:\amd-diffuser> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "F:\amd-diffuser\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    I also amended the command to specify the output folder and it also fails :

    virtualenv) (base) PS F:\amd-diffuser> py convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="F:\amd-diffuser\stable_diffusion_onnx"
    Traceback (most recent call last):
    File "F:\amd-diffuser\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    Can anyone tell me what this error means and how to fix it?
  61. Chris
    Sun, Sep 25, 2022, 11:20:52
    Tried again after going through each step and now I'm seeing this error :

    (virtualenv) (base) PS F:\amd-diffuser> py convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "F:\amd-diffuser\convert_stable_diffusion_checkpoint_to_onnx.py", line 20, in
    import torch
    ModuleNotFoundError: No module named 'torch'
  62. Chris
    Sun, Sep 25, 2022, 13:13:16
    In case anyone has the same issue - It was simply a case of running the command - "pip install onnx"

    Thanks for creating the guide, it took me a while to get it working but I am now able to create images with my 7-year-old AMD R9 390 8GB card in around 5 minutes.
  63. quickwick
    Sun, Sep 25, 2022, 21:39:03
    Thank you so much for putting this together, Neil.

    I've been playing with this for the past few days. I ended up hacking together a basic Tkinter-based GUI to make experimentation faster/easier. Hopefully other people find it useful. https://github.com/quickwick/stable-diffusion-win-amd-ui
  64. Luxion
    Sun, Sep 25, 2022, 21:52:44
    @quickwick
    Just something to keep in mind - it shouldn't take long for diffusers to add img2img and inpaint ONNX support. The PR is almost ready to be merged, it just needs some touches and verification. I tried it and it works.
    Somewhat unrelated to UI - one thing that I'm still trying to figure out is how to convert+load custom finetuned models such as WaifuDiffusion with ONNX, as well as how to load textual inversion embeddings... Not sure it's even possible atm. If someone knows something about this please share some insights.
  65. frosty
    Mon, Sep 26, 2022, 16:27:27
    @Luxion i have managed to get a conversion of Waifu working on this branch by substituting the hugging face path to the waifu model in the step where the model is converted:
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="hakurei/waifu-diffusion" --output_path="./stable_diffusion_onnx"

    however this didn't work out of the box since I also had to make some slight tweaks inside the diffusers package to get it to run.

    \virtualenv\Lib\site-packages\diffusers\schedulers\scheduling_ddim.py line 210:
    pred_original_sample = (torch.FloatTensor(sample) - beta_prod_t ** (0.5) * torch.FloatTensor(model_output)) / alpha_prod_t ** (0.5)

    virtualenv\Lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_onnx.py line 133:
    sample=latent_model_input, timestep=np.array([t], dtype=np.int64), encoder_hidden_states=text_embeddings


    obviously it's a very hacky approach modifying the packages and I'm screwed if I want to update, but it works :-)
  66. Luxion
    Mon, Sep 26, 2022, 18:48:37
    @frosty
    Thanks!
    Yesterday I too managed to convert it (the output i used was --output_path="./waifu_diffusion_onnx") but when trying it out it was giving me weird results. First odd thing was when trying to run 10 steps (or lower) it would run 14 instead, and though I haven't tried the txt2img in-depth, the img2img (that I got from one of the diffusers PR's and is working fine for the default SD model) was totally off. The more steps I added the more noisy the image would get. I found some info in here (https://huggingface.co/hakurei/waifu-diffusion/discussions/4) that changing the scheduler in a json file will make it work but it didn't. I then noticed that the scheduler is being used in other files as well so maybe it will work if I change it in all of them - still need to experiment with that. Hopefully I can get it to work without the need to change any package and if not, then so be it - it doesn't bother me but it isn't a good approach for the average user.
    Thank you again for your help.
  67. Mark
    Tue, Sep 27, 2022, 22:37:05
    @Luxion, "it shouldn't take long for diffusers to add img2img and inpaint ONNX support. The PR is almost ready to be merged just needs some touches and verification. I tried it and it works"

    Could you expand on this a bit? Which PR are you referring to? I'd like to give it a go myself.
  68. Luxion
    Tue, Sep 27, 2022, 23:08:06
    @Mark This one: https://github.com/huggingface/diffusers/pull/552
    Check the "files changed" - it adds 2 new .py files which are the img2img and inpaint pipelines and then changes other files as well in order to make them work. Basically you just need to go to ...\virtualenv\Lib\site-packages\diffusers\pipelines\stable_diffusion\ create those 2 files in there then manually add the respective changes in the other files.
    Now - because the pipelines are not finished yet - you have to do these following 2 steps:
    Go to https://huggingface.co/CompVis/stable-diffusion-v1-4/tree/main and download the 'vae' folder and put it inside the "stable_diffusion_onnx" folder.
    Then still inside "stable_diffusion_onnx", open up "model_index.json" and add:

    "vae": [
    "diffusers",
    "AutoencoderKL"
    ]

    Don't forget to add a comma after the preceding "]" and before "vae", so the file should end like this:

    "vae_decoder": [
    "diffusers",
    "OnnxRuntimeModel"
    ],
    "vae": [
    "diffusers",
    "AutoencoderKL"
    ]
    }

    Basically the pipelines will be using the default SD vae instead of ONNX until someone gets it working for that.

    Finally, make a script to run them similar to the txt2img one but using the arguments required for the new ones (I've only tested img2img so far). This is the most simple img2img.py script:


    from diffusers import StableDiffusionImg2ImgOnnxPipeline
    from PIL import Image

    baseImage = Image.open(r"cube.jpg").convert("RGB") # opens an image directly from the script's location and converts to RGB color profile
    baseImage = baseImage.resize((512,512))

    prompt = "an orange cube made of oranges"
    denoiseStrength = 0.8 # a float number from 0 to 1 - decreasing this number will increase result similarity with baseImage
    steps = 20
    scale = 7.5

    pipe = StableDiffusionImg2ImgOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    image = pipe(prompt, init_image=baseImage, strength=denoiseStrength, num_inference_steps=steps, guidance_scale=scale).images[0]
    image.save("output.png")

    --------------------

    @frosty
    I managed to get waifu diffusion to work with 2 minor changes in 2 "waifu-diffusion" files before converting but using the default scheduler was absolutely horrible. Ended up doing as you said and seems to be working just fine. Only had to modify 1 line in the img2img pipeline to make it work with DDIM and that was it.
    Thank you again! I'm at this very moment trying to figure out how to load textual embeddings, if by any chance you (or anybody else) knows how to please tell me!

  69. Lino
    Thu, Sep 29, 2022, 20:10:56
    Hey hey, thanks. Thank you to all the people who are helping us out. One question: I want EVERYTHING to be in one directory that I can pick.
  70. fuzz
    Fri, Sep 30, 2022, 01:12:18
    getting this error

    usage: convert_stable_diffusion_checkpoint_to_onnx.py [-h] --model_path MODEL_PATH --output_path OUTPUT_PATH
    [--opset OPSET]
    convert_stable_diffusion_checkpoint_to_onnx.py: error: the following arguments are required: --output_path

    no matter what i put for the output path i get this error. what am i doing wrong??
  71. frosty
    Fri, Sep 30, 2022, 11:19:23
    @Luxion yeah, im generally finding on my own build that if i have a problem its 9/10 trying to use a numpy array to access tensor methods.
    the thing im struggling with now is if/how i can get it to work with bigger or different images, anything other than 512*512 either gives me an error or garbled images
  72. FirstTimePython
    Fri, Sep 30, 2022, 12:00:57
    Thanks so much for this! Generating images with SD has become my second hobby now! One thing I was wondering was would it be possible to use the basujindal optimized SD model with this script? I've read it reduces the VRAM usage significantly.
  73. Marc-André
    Fri, Sep 30, 2022, 17:54:17
    Any plans for a macOS version? They also have AMD cards :)
  74. S
    Fri, Sep 30, 2022, 19:21:43
    Hey, thanks for the guide. But it seems I am stuck as I somehow cannot log in with hugging-face-cli.exe login because for some reason it won't respond to any input except for Enter.

    Token:
    Traceback (most recent call last):
    File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
    File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
    File "C:\Python310\Scripts\huggingface-cli.exe\__main__.py", line 7, in
    File "C:\Python310\lib\site-packages\huggingface_hub\commands\huggingface_cli.py", line 45, in main
    service.run()
    File "C:\Python310\lib\site-packages\huggingface_hub\commands\user.py", line 149, in run
    _login(self._api, token=token)
    File "C:\Python310\lib\site-packages\huggingface_hub\commands\user.py", line 319, in _login
    raise ValueError("Invalid token passed!")
    ValueError: Invalid token passed!
  75. Mark
    Fri, Sep 30, 2022, 20:04:30
    @S, @anyone-else-struggling-with-token,

    Ctrl-V won't paste from your clipboard; right-click the empty space after the "Token: " prompt to paste (nothing will visibly change, but it will paste—I promise). Then press enter.

    Alternatively, modify 'user.py'
    ...\lib\site-packages\huggingface_hub\commands\user.py

    Open the script, and look for:
    token=getpass("Token:")
    _login(self._api, token=token)

    Change to:
    token='yourtokenhere'#getpass("Token:")
    _login(self._api, token=token)

    Save the file, run the 'huggingface-cli login' command again.
  76. FluffRat
    Sat, Oct 01, 2022, 13:52:23
    I had to add "pip install onnx" directly after "pip install ./ort_nightly_directml-1.13.0.dev20220927003-cp310-cp310-win_amd64.whl --force-reinstall" as it was throwing an error in the conversion script on the "import onnx" line
  77. Pula
    Sat, Oct 01, 2022, 19:17:08
    I ran into 2 issues using this so far. The first was using huggingface-cli login. It won't show your auth token when you input it. Just copy it from the token page and right click where it should go in PowerShell.

    The second was you need to 'pip install onnx', or you'll run into module not found when executing the 'convert_stable_diffusion_checkpoint_to_onnx.py'. I'm still working on trying to install the files, my interwebs connection apparently sucks today.
  78. MSC
    Sat, Oct 01, 2022, 21:39:33
    Not sure what I am doing wrong.

    I attempt the ort_nightly install stage and get this.

    (virtualenv) PS C:\stable-diffusion> pip install ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64 --force-reinstall
    >>
    ERROR: Could not find a version that satisfies the requirement ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64 (from versions: none)
    ERROR: No matching distribution found for ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64
  79. FluffRat
    Sun, Oct 02, 2022, 14:26:31
    @MSC Go manually download the new version from the link given in the "install dependencies" section and put it in your working folder, then make sure to add ./ to the front of the filename like "pip install ./some-file-in-the-local-directory.blah"
  80. Lapo
    Sun, Oct 02, 2022, 14:43:58
    Thanks for the guide, it's very helpful; however I am stuck: when I run python .\text2img.py I get this error

    (virtualenv) PS D:\stable-diffusion\stable_diffusion_onnx> python .\text2img.py
    Traceback (most recent call last):
    File "D:\stable-diffusion\stable_diffusion_onnx\text2img.py", line 12, in
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    File "D:\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 288, in from_pretrained
    cached_folder = snapshot_download(
    File "D:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 98, in inner_f
    return f(*args, **kwargs)
    File "D:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\utils\_validators.py", line 92, in _inner_fn
    validate_repo_id(arg_value)
    File "D:\stable-diffusion\virtualenv\lib\site-packages\huggingface_hub\utils\_validators.py", line 142, in validate_repo_id
    raise HFValidationError(
    huggingface_hub.utils._validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: './stable_diffusion_onnx'.

    Does anyone know what this might be caused by?
  81. SovietYeet
    Sun, Oct 02, 2022, 19:15:33
    Cannot paste or type my huggingface code into command prompt, any idea why? I've tried right click to paste but no window pops up

  82. Frank
    Sun, Oct 02, 2022, 20:08:21
    Can someone tell me what I did wrong here when trying to run the utility script? I wrote it as
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="C:\Users\KieraPC\Desktop\stable-diffusion\stable_diffusion_onnx"
    and the actual address of the virtual environment is located in 'C:\Users\KieraBooper\Desktop\stable-diffusion'. I also have a text document 'convert_stable_diffusion_checkpoint_to_onnx.py' with the contents of that file on githubusercontent in both the 'stable-diffusion' folder and in the virtual environment. What am I doing wrong? Please treat me like a peasant and spell it out for me
  83. Frank
    Sun, Oct 02, 2022, 20:12:11
    Edit: sorry, that should both be 'C:\Users\KieraPC\Desktop\stable-diffusion'
  84. Frank
    Sun, Oct 02, 2022, 20:26:47
    I'm making some progress. I get 'Access to model CompVis/stable-diffusion-v1-4 is restricted' even after I've entered my access token:
    'Login successful
    Your token has been saved to C:\Users\KieraPC\.huggingface\token'
    This is so hard it's mind blowing
  85. Shamu
    Sun, Oct 02, 2022, 21:23:36
    This is using my RAM, not my VRAM/GPU. Am I missing a config that doesn't exist in the code provided? Do I need to change the "provider?"
  86. A. Dumas
    Sun, Oct 02, 2022, 21:39:22
    @Frank Regarding your last message about "Access restricted": on the https://huggingface.co/CompVis/stable-diffusion-v1-4 page you will have a message with "You need to share your contact information to access this model.". You need to agree to the terms below the text (checkbox) and then click the "Access repository" button. After that you can run the script again and it should work.
  87. FluffRat
    Mon, Oct 03, 2022, 00:28:37
    @Frank Try doing this somewhere other than your desktop, for example c:\stable\ as Windows can get a little weird about permissions in user files. No idea if this has anything to do with your problem but I've seen it break things in odd ways before and it's an easy change. Second thing I would change is to specify the path during the conversion step like:
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    For reference, I ran that from powershell not cmd. I'm not sure if the slash needs to be flipped for cmd shell.
  88. Devon
    Mon, Oct 03, 2022, 09:30:51
    Great guide, thanks for putting this together for us AMD Andys!
    I just wanted to ask real quick, after inputting the script to grab stable diffusion, should the prompt window be throwing out endless Warnings going on about shape inferences and constant folding steps?
    Cheers!
  89. Cloudy
    Mon, Oct 03, 2022, 11:54:51
    Thanks for the guide, my computer now hates you! ;-)
    I've altered your script to run a loop over 200 seed numbers (50,000 - 50,199) for a successful prompt; here are my favorite results:
    https://imgur.com/a/S02vUMy/layout/grid
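    (A minimal sketch of a seed sweep like Cloudy's, assuming your diffusers version's ONNX pipeline accepts a NumPy latents array like the standard pipelines do; the prompt and seed range are placeholders:)

    import numpy as np
    from diffusers import StableDiffusionOnnxPipeline

    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    prompt = "your successful prompt here"  # placeholder

    # One 512x512 image per seed; the latents shape is (batch, channels, height/8, width/8).
    for seed in range(50000, 50200):
        latents = np.random.RandomState(seed).randn(1, 4, 64, 64).astype(np.float32)
        image = pipe(prompt, latents=latents).images[0]
        image.save(f"seed_{seed}.png")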
  90. Mike
    Mon, Oct 03, 2022, 15:49:38
    Does anybody know how to run "save_onnx.py" or "convert_stable_diffusion_checkpoint_to_onnx.py" with an already-downloaded model checkpoint?
  91. Luxion
    Mon, Oct 03, 2022, 23:21:24
    @Mike
    I'm short on time so I will summarize.
    You need to find (on the diffusers GitHub), download, and place 'convert_original_stable_diffusion_to_diffusers.py' at the same location where you have the other convert script. Place the checkpoint in there as well, and run the script like this:

    python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./pokemon-ema-only-epoch-000142.ckpt" --dump_path="./pokemon-ema-only-epoch-000142"

    This will convert a checkpoint to diffusers but you then need to convert it to ONNX.
    Run convert_stable_diffusion_checkpoint_to_onnx.py like so:

    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./pokemon-ema-only-epoch-000142" --output_path="./pokemon-ema-only-epoch-000142_onnx"

    You can now delete the diffusers-generated folder and just keep the ONNX one.
    You will probably need to use the DDIM scheduler with that model, so look up part 2 of this tutorial, which explains how to do so.
    Here is how to use it in the txt2img script:

    ...
    modelName = "pokemon-ema-only-epoch-000142_onnx"
    ddim = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000, clip_sample=False, set_alpha_to_one=False, tensor_format="np")
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./" + modelName, provider="DmlExecutionProvider", scheduler=ddim)
    ...
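    (Filling in the elided parts of that fragment, a complete script might look like the sketch below; the prompt and output filename are placeholders, and this assumes the diffusers v0.3.0 API used throughout this guide:)

    from diffusers import DDIMScheduler, StableDiffusionOnnxPipeline

    modelName = "pokemon-ema-only-epoch-000142_onnx"
    # The DDIM scheduler, configured with the values given above.
    ddim = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear",
                         num_train_timesteps=1000, clip_sample=False, set_alpha_to_one=False,
                         tensor_format="np")
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./" + modelName, provider="DmlExecutionProvider", scheduler=ddim)

    image = pipe("a pokemon standing in a field").images[0]  # placeholder prompt
    image.save("output.png")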
  92. Luxion
    Tue, Oct 04, 2022, 09:08:35
    @Mike
    One thing I forgot to mention: you might need to make a change in the DDIM scheduler itself, perhaps even prior to model conversion. If so, just look at frosty's comments on this page - he explains how to do it.
  93. FluffRat
    Tue, Oct 04, 2022, 11:17:05
    @Luxion Oh hey, nice; I had been wondering how to do that too. Time to convert all the sketchy checkpoints and light my GPU on fire.
  94. ananab
    Wed, Oct 05, 2022, 13:55:37
    Traceback (most recent call last):
    File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
    File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\Scripts\huggingface-cli.exe\__main__.py", line 7, in
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\lib\site-packages\huggingface_hub\commands\huggingface_cli.py", line 45, in main
    service.run()
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\lib\site-packages\huggingface_hub\commands\user.py", line 149, in run
    _login(self._api, token=token)
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\lib\site-packages\huggingface_hub\commands\user.py", line 320, in _login
    hf_api.set_access_token(token)
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\lib\site-packages\huggingface_hub\hf_api.py", line 719, in set_access_token
    write_to_credential_store(USERNAME_PLACEHOLDER, access_token)
    File "C:\Programmi\AI\stable_diffusion_amd\amd_venv\lib\site-packages\huggingface_hub\hf_api.py", line 588, in write_to_credential_store
    with subprocess.Popen(
    File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 969, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
    File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1438, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
    FileNotFoundError: [WinError 2] The system cannot find the file specified

    I get this error... I reinstalled a non-store Python and I installed Git too...
    I tried editing the user.py file, inserting the line token = 'MY TOKEN' #getpass("Token: "), but it keeps giving me this problem...
  95. Harish Anand
    Fri, Oct 07, 2022, 18:39:02
    Thanks!
  96. Edgar
    Sat, Oct 08, 2022, 10:36:04
    Thank you, this is amazing! Now, how to connect the automatic1111 web UI with this?
  97. Edgar
    Sat, Oct 08, 2022, 10:38:44
    For anyone stuck at the HF login even with git installed, here is how I got past it:
    pip install huggingface_hub
    python -c "from huggingface_hub.hf_api import HfFolder; HfFolder.save_token('MY_HUGGINGFACE_TOKEN_HERE')"
  98. Ben Johnson
    Sat, Oct 08, 2022, 13:52:45
    Apologies, I am stuck and need help. I have installed Python 3.7.9 and it came with pip 20.1.1

    I am at this command, pip install ./ort_nightly_directml-1.13.0.dev20220913011-cp37-cp37m-win_amd64

    When I run it, however, I am met with 'invalid requirement'? I've tried downloading different versions of the DirectML wheel, and upgrading pip, but none seem to work
  99. Luxion
    Sat, Oct 08, 2022, 14:58:40
    @Ben Johnson
    That version of Python might work but it's not the latest. You should probably be using 3.10.x just to be safe.
    As for why it is failing: it's probably because you are using the wrong command.
    'pip install' is often used to retrieve libraries from the internet, but when you specify a local file it will try to install that instead. What you are specifying is a path without the file extension, so pip can't find the file - you need to add .whl at the end, like so:
    pip install ./ort_nightly_directml-1.13.0.dev20220913011-cp37-cp37m-win_amd64.whl
  100. Ornithopter
    Sat, Oct 08, 2022, 15:58:58
    Amazing article. And how can I use automatic1111's WebUI with this?
    They offered a solution on Linux with AMD GPUs, but I need it on Windows.
    So how can I get Windows + AMD GPUs + WebUI?
  101. theGuy
    Mon, Oct 10, 2022, 06:14:19
    Finally some action on my GPU! This is working great. Like, MUCH faster. My i5-12400F takes between 30 and 45 minutes to generate a usual batch; my RX 570 generated a single image in less than five minutes. Granted, it's not generating six images at the same time, but it's WAY faster.
  102. Adrian
    Mon, Oct 10, 2022, 13:54:50
    I get a strange read timeout whatever I do.
    While I'm trying to download the onnx-ified files of the model, the program decides to stop and raises a timeout error.
    Does anyone know how I should fix it? When I test my internet connection, there doesn't seem to be any problem.

    requests.exceptions.ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
  103. BlueJay
    Tue, Oct 11, 2022, 01:03:56
    I've gotten to the part where it asks me for a token, but I cannot input anything (not by typing or Ctrl+V); it gives no error message and there's no indication that anything is wrong. This is my first time ever messing with PowerShell or anything of this nature. Is there something obvious I'm missing? I've tried some of the things listed in the comments above, but to no avail.
  104. Adrian
    Tue, Oct 11, 2022, 19:17:47
    Yeah, it's a normal error. It happened to me too. You have to look in the error statement for a file called user.py. When you've got it, edit it with your favorite text editor (I recommend Word xD) and look for something like "token = getToken()". Replace getToken() with "your_token". That should be it. Re-run it and voila.
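    (The exact line varies by huggingface_hub version, but the spirit of the edit is something like this; the token string is a placeholder:)

    # Inside huggingface_hub/commands/user.py, where the CLI reads the token:
    # token = getpass("Token: ")     # original line that prompts in the terminal
    token = "hf_YourTokenHere"       # placeholder: hard-code your own access token instead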
  105. BlueJay
    Tue, Oct 11, 2022, 22:28:17
    Yep that worked perfectly, I ran into a different problem and now I'm stuck. At this point I'm just gonna call it a wrap and give up. I appreciate the help tho bud!
  106. 489
    Wed, Oct 12, 2022, 08:50:09
    (virtualenv) C:\SD>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "convert_stable_diffusion_checkpoint_to_onnx.py", line 24, in <module>
    from diffusers import StableDiffusionOnnxPipeline, StableDiffusionPipeline
    File "C:\SD\virtualenv\lib\site-packages\diffusers\__init__.py", line 15, in <module>
    from .onnx_utils import OnnxRuntimeModel
    File "C:\SD\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 31, in <module>
    import onnxruntime as ort
    File "C:\SD\virtualenv\lib\site-packages\onnxruntime\__init__.py", line 55, in <module>
    raise import_capi_exception
    File "C:\SD\virtualenv\lib\site-packages\onnxruntime\__init__.py", line 23, in <module>
    from onnxruntime.capi._pybind_state import (
    File "C:\SD\virtualenv\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 33, in <module>
    from .onnxruntime_pybind11_state import * # noqa
    ImportError: DLL load failed: The specified module could not be found.

    I am plagued with errors.
    I am unable to retrieve the model with the above message.
    Can someone please give me some hints?
  107. i axe u a question
    Wed, Oct 12, 2022, 13:38:13
    Is there a way to train the AI?
  108. LErikson
    Wed, Oct 12, 2022, 15:36:52
    Ahoi,
    nice work - I tried different manuals to get it to work on my desktop with AMD.
    I have the feeling the art style is stuck: nothing changes in the art style if I change the 'art by ...' part of the prompt - anyone else have that problem? (or anyone have a solution? ;))
  109. LErikson
    Wed, Oct 12, 2022, 15:37:59
    forgot something:
    nice work - I tried different manuals to get it to work on my desktop with AMD, until I found your site and it was the solution. Thank you!
  110. 1312
    Wed, Oct 12, 2022, 18:20:32
    This seems interesting. Unfortunately, I can't seem to run the utility script.
    (virtualenv) (base) C:\Users\1312>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    D:\miniconda3\python.exe: can't open file 'C:\Users\1312\convert_stable_diffusion_checkpoint_to_onnx.py': [Errno 2] No such file or directory
  111. 1312
    Wed, Oct 12, 2022, 18:40:09
    Well now it says
    Traceback (most recent call last):
    File "C:\Users\1312\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'
  112. 489
    Wed, Oct 12, 2022, 18:42:10
    Hi, 1312
    Try installing onnx:
    pip install onnx
  113. 1312
    Wed, Oct 12, 2022, 18:44:13
    Ok nvm now it says this

    Traceback (most recent call last):
    File "C:\Users\1312\stable-diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 20, in
    import torch
    ModuleNotFoundError: No module named 'torch'

    Sorry, I ain't no coding master and I have no clue what I am doing :(
  114. 1312
    Wed, Oct 12, 2022, 18:51:05
    Thanks 489, I installed onnx and torch, but now it says
    OSError: There was a specific connection error when trying to load CompVis/stable-diffusion-v1-4:
    (Request ID: jJWdiKecAfEcU9TsGlEU5)
  115. Luxion
    Wed, Oct 12, 2022, 23:23:26
    I've figured out how to use Textual Inversion embeddings with ONNX and wrote a guide, but it's too long to post in here since it includes a script, so I'll post a pastebin link for those who may be interested:

    https://pastebin.com/1mF6zdvc
  116. 0Point
    Fri, Oct 14, 2022, 05:23:42
    @Luxion
    Do you have any clue about replacing the VAE in the diffusers model with another vae.pt file? (or somehow converting the .pt to a .bin file?)
    I've tried a lot of approaches, but they all ended up failing. ):
  117. Luxion
    Fri, Oct 14, 2022, 15:01:55
    @0Point You can take advantage of the conversion scripts for that. With the NAI leak, Waifu's recent version, and surely others in the future shipping their own external VAE, this will become more common.
    Usually the .ckpt file has the VAE integrated within it, so when you run the "convert_original_stable_diffusion_to_diffusers.py" script (which you can get in the scripts folder of the diffusers GitHub) it will try to load the VAE from there to then convert to diffusers - but since there is no VAE it will return an error. The way to solve this is to edit the script so you can point it to the external VAE instead.
    Here is a pastebin link for my edited script:
    https://pastebin.com/skwwbPpw

    You can use it by following the instructions at the top or just get some insights from it and edit your own. Don't forget you then need to convert it to ONNX with the convert to ONNX script.
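    (As a rough illustration of the loading step involved - the filename below is a placeholder, and whether the weights sit under a 'state_dict' key depends on how the VAE was exported:)

    import torch

    # Load the external VAE weights on the CPU (loading onto an AMD GPU would need CUDA).
    vae_ckpt = torch.load("./external_vae.pt", map_location="cpu")  # placeholder filename
    vae_state_dict = vae_ckpt.get("state_dict", vae_ckpt)  # some dumps nest the weights under "state_dict"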

  118. Chris
    Fri, Oct 14, 2022, 16:27:07
    Thank you for all the work. But my output pictures seem wrong and broken, just some random colors.
    Here is my text2img.py:

    from diffusers import StableDiffusionOnnxPipeline
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))
    prompt = "A happy dog'"

    image = pipe(prompt).images[0]
    image.save("output.png")

    When I run python .\text2img.py
    I got this:
    (virtualenv) PS H:\stable-diffusion> python .\text2img.py
    2022-10-14 09:03:13.5408074 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-14 09:03:13.8936722 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-14 09:03:13.9006160 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    2022-10-14 09:03:14.9648983 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-14 09:03:15.0303245 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-14 09:03:15.0370503 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    2022-10-14 09:03:15.5524993 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-14 09:03:15.7286590 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-14 09:03:15.7357294 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    2022-10-14 09:03:16.4002876 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-14 09:03:17.9623164 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-14 09:03:17.9691546 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
    100%|██████████████████████████████████████████████████████████████████████████████████| 51/51 [03:09<00:00, 3.72s/it]

    Can you tell me what it means? And how can I fix it?
  119. Philoc
    Sun, Oct 16, 2022, 00:22:14
    Is there any way to make this compatible with other models based on SD?
  120. AdX
    Sun, Oct 16, 2022, 02:45:14
    Any signs of progress for AMD card speeds compared to similar spec NVIDIA cards?
  121. Star Wars
    Sun, Oct 16, 2022, 02:47:43
    I am just getting a black image for output.
  122. stasisfield
    Sun, Oct 16, 2022, 04:47:44
    @Luxion
    I tried to run that script to convert models but I keep getting this error:

    Traceback (most recent call last):
    File "D:\aaa\convert_original_stable_diffusion_to_diffusers.py", line 688, in
    checkpoint = torch.load(args.checkpoint_path, map_location="cpu")["state_dict"] # Luxion edited - added map_location=torch.device("cpu") -- because on AMD gpu's loading into the GPU will probably cause errors
    KeyError: 'state_dict'
  123. Magnum
    Sun, Oct 16, 2022, 06:07:44
    Is it possible to use negative prompts with this? They are very useful for getting good results.
  125. Luxion
    Sun, Oct 16, 2022, 16:42:48
    @stasisfield
    That's because what you are trying to convert is not a fine-tuned model in the original SD base format - for reference, all the custom fine-tuned models released out there (that I've seen), such as WAIFU, follow the original SD format. So it makes me wonder if what you are trying to convert is a real fine-tuned model or something else entirely.
    You can even use this script with dreambooth .ckpt's - it works just fine.
    Normal .ckpt's have a built-in dictionary called 'state_dict' - which is what needs to be loaded before conversion - and it is my understanding that the keys and values of that dictionary are the weights of the model itself.
    Make sure what you are trying to convert is a real fine-tuned model and not something else; if it is, then I guess you have to make the necessary changes on your own. You can try loading the whole .ckpt by removing '["state_dict"]', then make a loop to print what is inside of it to understand what you need to change.
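    For example, a quick sketch of that inspection (the path is a placeholder):

    import torch

    ckpt = torch.load("./model.ckpt", map_location="cpu")  # placeholder path
    for key in ckpt:
        print(key)  # a normal SD checkpoint should list "state_dict" here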
    If you can tell me where to find the file you are trying to convert then I can take a look into it.

    @Magnum
    Yes. I've integrated that feature myself but haven't actually checked how well those negative tags are affecting the results. But overall it seems to be working just fine.
    Check out this link: https://github.com/huggingface/diffusers/pull/549/files
    The file at the bottom is the ONNX pipeline. In your own pipeline, remove the code that shows in red and add the green code. Your file is located at: ...\virtualenv\Lib\site-packages\diffusers\pipelines\stable_diffusion\
    Then to use it you need to add some string containing the negative tags and pass it as an argument like so:

    negativePrompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"
    ...
    image = pipe([Your Other Args Here], negative_prompt=negativePrompt).images[0]
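    (Putting Luxion's pieces together, a minimal sketch - this assumes you have applied the pipeline patch from the PR above, and the prompt strings are just examples:)

    from diffusers import StableDiffusionOnnxPipeline

    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    negativePrompt = "lowres, bad anatomy, bad hands, text, error, watermark, blurry"
    # negative_prompt is only understood once the pipeline file has been patched as described above.
    image = pipe("portrait of a knight, highly detailed", negative_prompt=negativePrompt).images[0]
    image.save("output.png")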
  126. Magnum
    Sun, Oct 16, 2022, 23:38:35
    @Luxion
    Oh wow, that worked like a charm. My results are orders of magnitude better and people don't have 3 legs anymore.
  127. Richard
    Sun, Oct 16, 2022, 23:48:59
    I'm unable to run the program due to "not enough memory resources" despite having plenty of both RAM and VRAM available.
    ```
    2022-10-16 16:46:29.2725618 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-16 16:46:35.0815525 [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\ExecutionProvider.cpp(563)\onnxruntime_pybind11_state.pyd!00007FFC4C635FC1: (caller: 00007FFC4C635D62) Exception(2) tid(375c) 8007000E Not enough memory resources are available to complete this operation.
    Traceback (most recent call last):
    File "C:\stable-diffusion-amd\txt2img.py", line 2, in
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 383, in from_pretrained
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 182, in from_pretrained
    return cls._from_pretrained(
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 151, in _from_pretrained
    model = OnnxRuntimeModel.load_model(os.path.join(model_id, model_file_name), provider=provider)
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 68, in load_model
    return ort.InferenceSession(path, providers=[provider])
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
    File "C:\stable-diffusion-amd\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\ExecutionProvider.cpp(563)\onnxruntime_pybind11_state.pyd!00007FFC4C635FC1: (caller: 00007FFC4C635D62) Exception(2) tid(375c) 8007000E Not enough memory resources are available to complete this operation.
    ```
  128. Bob
    Mon, Oct 17, 2022, 02:17:31
    Does anyone else have an issue where your entire graphics card essentially just shuts off randomly during some renders? It does that for me and then sends my computer into an endless power cycle. It is rare but it does happen.
  129. Astryx204
    Mon, Oct 17, 2022, 02:37:37
    @Bob I am having the same issue. My screen just goes black sometimes; it'll do a certain number of iterations and then it'll just kill my GPU. I have to restart my PC using the power button when that happens.

    Anyone know what to do? This issue makes this unusable.
  130. Sergey
    Wed, Oct 19, 2022, 14:53:44
    Recent updates to the diffusers repository made img2img and inpaint possible. Either wait for a package release (and run pip install --upgrade diffusers) or manually hack the changes into the onnx_pipeline stuff.
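    (For reference, once you are on a diffusers release that ships them, the ONNX img2img pipeline can be used roughly like the sketch below; the paths and prompt are placeholders, and the image argument has been renamed between releases, so check your version:)

    from PIL import Image
    from diffusers import OnnxStableDiffusionImg2ImgPipeline

    pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    init = Image.open("input.png").convert("RGB").resize((512, 512))
    # Later releases call this argument image instead of init_image.
    result = pipe(prompt="a fantasy landscape", init_image=init, strength=0.75).images[0]
    result.save("img2img_output.png")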
  131. Richard
    Wed, Oct 19, 2022, 16:24:31
    I'm encountering some errors:

    (virtualenv) C:\Users\Richard\Desktop>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\Richard\Desktop\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    How to fix this?
  132. Richard
    Wed, Oct 19, 2022, 16:32:14
    tried running pip install onnx

    and this happened:

    (virtualenv) C:\Users\Richard\Desktop>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\Richard\Desktop\convert_stable_diffusion_checkpoint_to_onnx.py", line 24, in
    from diffusers import OnnxStableDiffusionPipeline, StableDiffusionPipeline
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers' (C:\Users\Richard\Desktop\virtualenv\lib\site-packages\diffusers\__init__.py)

    how to fix this issue?
  133. Ricard
    Wed, Oct 19, 2022, 16:47:19
    If anyone is stuck on this error:
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers'

    You have to replace all instances of OnnxStableDiffusionPipeline with StableDiffusionOnnxPipeline.

  134. Christian
    Thu, Oct 20, 2022, 00:28:42
    Getting the following error. Anyone seen this before or know how to fix it?
    File "convert_stable_diffusion_checkpoint_to_onnx.py", line 181, in convert_models
    onnx_pipeline = StableDiffusionOnnxPipeline(
    TypeError: __init__() got an unexpected keyword argument 'vae_encoder
  135. Richard
    Thu, Oct 20, 2022, 12:50:22
    @Ricard, you wrote:
    "If anyone is stuck on this error:
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers'
    You have to replace all instances of OnnxStableDiffusionPipeline with StableDiffusionOnnxPipeline."

    So sorry, I'm not familiar with the coding - how do I do this?
  136. Luxion
    Thu, Oct 20, 2022, 13:53:58
    @Richard
    Long story short, you probably only need to edit the convert script: the diffusers library you downloaded is still the same v0.3.0 as everybody else who followed this guide. Only the convert script is different on your end, because it was updated and they changed the name of the pipeline - for whatever reason.
    So simply open the convert script with a text editor, do CTRL+F and replace:
    OnnxStableDiffusionPipeline
    with:
    StableDiffusionOnnxPipeline
    Then save and try again.
  137. Luxion
    Thu, Oct 20, 2022, 14:03:19
    @Richard, @Christian
    I found some differences in the script that might cause problems so to make things simpler for you guys I posted my v0.3.0 script on pastebin:
    https://pastebin.com/15HcfTXZ
    Replace the content of your script with that code and it should work.

    @Neil
    The convert script is outdated and no longer works for this tutorial, plus they have finally released the img2img and inpaint pipelines.
    We still don't have a UI but now might be a good time to make a new guide - if you can :D
  138. Nathan
    Thu, Oct 20, 2022, 14:46:11
    What on earth have I done here?

    I tried to run: py convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"\

    After a TON of warnings, I ended up with the same error as Christian:
    Traceback (most recent call last):
    File "C:\Stable Diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 227, in
    convert_models(args.model_path, args.output_path, args.opset)
    File "C:\Stable Diffusion\virtualenv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "C:\Stable Diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 186, in convert_models
    onnx_pipeline = StableDiffusionOnnxPipeline(
    TypeError: StableDiffusionOnnxPipeline.__init__() got an unexpected keyword argument 'vae_encoder'

    Here's a paste of all of the errors: https://pastebin.com/seKs5dX4

    How can I fix it?
  139. Kuojin
    Fri, Oct 21, 2022, 01:05:59
    Not sure why, but when I try to paste in my token, it doesn't let me do so. I've tried this on Powershell and CMD. Any ideas? Thanks :)
  140. Kuojin
    Fri, Oct 21, 2022, 02:02:24
    Managed to figure out the login issue. Now I'm getting the following error when running: python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"

    Traceback (most recent call last):
    File "C:\Users\Brian\Documents\stablediffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 24, in
    from diffusers import OnnxStableDiffusionPipeline, StableDiffusionPipeline
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers' (C:\users\brian\Documents\stablediffusion\virtualenv\lib\site-packages\diffusers\__init__.py)

    No idea what is causing this one, any help would be appreciated. Thanks :)
  141. Kuojin
    Fri, Oct 21, 2022, 02:03:54
    Thanks John, will give it a go :)
  142. Kuojin
    Fri, Oct 21, 2022, 02:05:59
    Hmm, it seems that was the build I had been using, so no joy on that front.
  143. John
    Fri, Oct 21, 2022, 05:06:38
    If you want this to work, don't download the latest nightly build of ORT. Stick to the Sept. 8 build suggested in the tutorial.
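    (That is, reinstall the Sept. 8 wheel over whatever you have now; the cp310 tag below is just an example and must match your Python version:)

    pip install ./ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64.whl --force-reinstall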
  144. TeaAndBread
    Fri, Oct 21, 2022, 12:18:46
    Been following this guide with basically no coding knowledge, but now I seem to be stuck at the finish line. When I try to run text2img.py, the action gets aborted with this error:
    File "X:\Programme\DiffusionEnvironment\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 218, in __init__
    super().__init__(
    TypeError: OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder'

    Anyone know what might be the cause of this? I assumed it was a problem with the vae_encoder in the stable_diffusion_onnx folder, but that seems alright, and it contains a file named model (133MB), same as the ONNX build on Hugging Face
  145. Srini
    Sun, Oct 23, 2022, 10:34:44
    Hi, I'm running the utility script and I keep getting this. Do you know what to do?
    (virtualenv) PS C:\Users\datta> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\datta\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'
    (virtualenv) PS C:\Users\datta>
  147. srini
    Sun, Oct 23, 2022, 10:58:52
    I was able to fix the previous error but got a new one.
    (virtualenv) PS C:\Users\datta> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_errors.py", line 213, in hf_raise_for_status
    response.raise_for_status()
    File "C:\Users\datta\virtualenv\lib\site-packages\requests\models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
    requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/models/CompVis/stable-diffusion-v1-4/revision/main

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
    File "C:\Users\datta\convert_stable_diffusion_checkpoint_to_onnx.py", line 215, in
    convert_models(args.model_path, args.output_path, args.opset)
    File "C:\Users\datta\virtualenv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "C:\Users\datta\convert_stable_diffusion_checkpoint_to_onnx.py", line 73, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, use_auth_token=True)
    File "C:\Users\datta\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 288, in from_pretrained
    cached_folder = snapshot_download(
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 98, in inner_f
    return f(*args, **kwargs)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_validators.py", line 94, in _inner_fn
    return fn(*args, **kwargs)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\_snapshot_download.py", line 157, in snapshot_download
    repo_info = _api.repo_info(
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_validators.py", line 94, in _inner_fn
    return fn(*args, **kwargs)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\hf_api.py", line 1491, in repo_info
    return method(
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_validators.py", line 94, in _inner_fn
    return fn(*args, **kwargs)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_deprecation.py", line 98, in inner_f
    return f(*args, **kwargs)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\hf_api.py", line 1289, in model_info
    hf_raise_for_status(r)
    File "C:\Users\datta\virtualenv\lib\site-packages\huggingface_hub\utils\_errors.py", line 254, in hf_raise_for_status
    raise HfHubHTTPError(str(HTTPError), response=response) from e
    huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: 4O6kTK075qkZG_6Ei-bhN)
  148. Colin
    Sun, Oct 23, 2022, 17:51:03
    Unfortunately this post has aged well. I too expected a better and richer experience to become available on AMD/Windows rather quickly, but looking at the ML ecosystem, there just isn't anything that's even a little bit promising. AMD for ML is flaky at best, and that's ROCm on Linux; there is next to nothing for Windows, and what is there is in its infancy. PyTorch-DirectML exists, but it currently supports an ancient Python and PyTorch version (it works for TensorFlow, though). There is something called Orochi by AMD, supposedly allowing the use of ROCm on Windows, but it needs to be built into PyTorch/TensorFlow or JAX, and it's unclear (probably simply not the case) whether optimised libraries like MIGraphX work with it. And then there are a bunch of inference engines using Vulkan compute, like the mentioned onnxruntime. I'd not hold my breath for the situation to improve much in the near future, especially if you expect a solution that can keep up with the latest developments like hypernetworks/Dreambooth/CLIP guidance, as these would all need to be implemented for an AMD/Windows stack that isn't PyTorch. The first thing available will likely be PyTorch with DirectML; it works, but judging by TensorFlow-DirectML vs TensorFlow/ROCm on Linux, the performance will be rather suboptimal.
  149. Phil Z
    Mon, Oct 24, 2022, 17:57:04
    I am not too well versed in the Python ways, but it seems that it has issues accessing the Hugging Face website.
    I can go in and download the .ckpt file just fine.
    Is there a script edit I can use to feed that .ckpt file to the conversion script?
  150. Jade
    Mon, Oct 24, 2022, 22:52:52
    I'm stuck at the conversion of Stable Diffusion (100% downloaded). I'm getting these errors and I have no idea what I should try now (deleting the huggingface and virtualenv folders and redownloading didn't help):
    https://i.imgur.com/ICilzYC.jpeg
    //
    https://justpaste.it/96442
  151. Jade
    Tue, Oct 25, 2022, 10:06:43
    New day, different errors.
    https://justpaste.it/3zp02
  152. Nathan
    Tue, Oct 25, 2022, 15:38:51
    Ok, so I got it running, but every time I run it, I get these messages. What is going on?

    2022-10-25 23:36:44.2193970 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-25 23:36:45.8210854 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-25 23:36:45.8257960 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    2022-10-25 23:36:53.0879966 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-25 23:36:53.1552166 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-25 23:36:53.1585294 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
    2022-10-25 23:36:53.9236496 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-25 23:36:54.1009858 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-25 23:36:54.1057708 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
    2022-10-25 23:36:55.5543313 [W:onnxruntime:, inference_session.cc:492 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider.
    2022-10-25 23:36:55.9146185 [W:onnxruntime:, session_state.cc:1030 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.
    2022-10-25 23:36:55.9190611 [W:onnxruntime:, session_state.cc:1032 onnxruntime::VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.
  153. Christian
    Tue, Oct 25, 2022, 18:58:19
    @Luxion Thanks!
  154. Jason
    Thu, Oct 27, 2022, 02:40:47
    Thank you so much for this! I have a 5700XT and a friend told me about stable diffusion but it only used CPU. I'm about done but the last text2img.py thing has a type error: 'OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder''

    I'll figure it out soon enough, I appreciate all of the good information though!
  155. Jason
    Thu, Oct 27, 2022, 04:48:43
    I fixed my previous issue by adding 'vae_encoder=vae_encoder,' to line 245 and 'vae_encoder: OnnxRuntimeModel,' to line 232 of pipeline_onnx_stable_diffusion.py (located in directory: stable-diffusion-webui\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\)

    if anyone else runs into that issue :)
  156. dabu
    Thu, Oct 27, 2022, 18:30:28
    I could fix 'OnnxStableDiffusionPipeline.__init__() missing 1 required positional argument: 'vae_encoder' by commenting out line 197:
    #vae_encoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_encoder"),
  157. HeadBanging
    Sat, Oct 29, 2022, 04:47:54
    For those with issues stating StableDiffusionOnnxPipeline.__init__() got an unexpected keyword argument 'vae_encoder'

    you need to remove the following line in the convert_stable_diffusion_checkpoint_to_onnx.py (line 197)

    vae_encoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_encoder"),

    Apparently, this parameter is no longer accepted by the StableDiffusionOnnxPipeline constructor
  159. HeadBanging2
    Sun, Oct 30, 2022, 05:54:03
    I'm missing something: what generates the model_index.json? I don't get it - it does not exist in my stable_diffusion_onnx folder
  160. ....
    Mon, Oct 31, 2022, 14:32:26
    I appreciate the work you put into this, but my god does it have a lot of problems.
    Anyway, I got to the text2img.py script and got this error after fixing the vae_encoder one:

    ValueError: Pipeline expected {'unet', 'scheduler', 'text_encoder', 'tokenizer', 'safety_checker', 'vae_encoder', 'vae_decoder', 'feature_extractor'}, but only {'unet', 'scheduler', 'text_encoder', 'tokenizer', 'safety_checker', 'vae_decoder', 'feature_extractor'} were passed.

    Do any of you know how to fix it? It seems that sometimes the code doesn't want the vae_encoder and sometimes it crashes without it. So is the variable even needed? And if not, how do I remove all references to it?
  161. sup bros
    Mon, Oct 31, 2022, 19:36:38
    soup /b/ will check back later for ebic 1 click installation
  162. John
    Tue, Nov 01, 2022, 18:02:38
    The GPU driver crashes at the end of the image iterations.

    2022-11-01 18:59:18.2754295 [E:onnxruntime:, sequential_executor.cc:369 onnxruntime::SequentialExecutor::Execute] Non-zero status code returned while running MemcpyToHost node. Name:'Memcpy_token_33' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\MLOperatorAuthorImpl.cpp(1978)\onnxruntime_pybind11_state.pyd!00007FFE9C26C16F: (caller: 00007FFE9C97DDAF) Exception(3) tid(17d0) 887A0006 The GPU is no longer responding to commands. An invalid command was probably passed by the calling application.

    Traceback (most recent call last):
    File "D:\AI\Stable_Diffusion\text2img.py", line 8, in
    image = pipe(prompt, num_inference_steps=10).images[0]
    File "D:\AI\Stable_Diffusion\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_onnx.py", line 150, in __call__
    image = self.vae_decoder(latent_sample=latents)[0]
    File "D:\AI\Stable_Diffusion\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 51, in __call__
    return self.model.run(None, inputs)
    File "D:\AI\Stable_Diffusion\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException
  163. Zaniet
    Thu, Nov 03, 2022, 16:58:45
    To be clear, I have no idea what I'm doing; I just thought this was cool and wanted to give it a shot. Everything is well put together and easy enough to follow even with zero coding experience. Anyway, I'm stuck at the 6GB download part. After a bit of trial and error I got everything to download, but then directly after, it tells me ({'vae_decoder'} was not found in config. Values will be initialized to default values.) and I have no idea where to go from here. Was still fun enough to figure out to this point; might give it another shot later.
  164. Zaniet
    Thu, Nov 03, 2022, 17:00:58
    OH HeadBanging just a bit above this gave an answer. Dammit qwp
  165. Aaron
    Fri, Nov 04, 2022, 17:30:49
    Trying to follow this, I get ModuleNotFoundError: No module named 'onnx' when trying to run the script. I tried installing onnx via pip, but then I get ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers'
  166. Pitonne
    Sat, Nov 05, 2022, 00:52:11
    Hi, I'm stuck on the
    python C:\Users\23082\virtualenv/convert_stable_diffusion_checkpoint_to_onnx.py --model_path="C:\Users\23082\virtualenv" --output_path="./stable_diffusion_onnx"
    part.
    File "C:\Users\23082\virtualenv\lib\site-packages\diffusers\configuration_utils.py", line 201, in get_config_dict
    raise EnvironmentError(
    OSError: Error no file named model_index.json found in directory C:\Users\23082\virtualenv.

    It seems that I'm missing a file, but I'm sure I have done everything properly.
    Note that I had these issues before:
    -> file not in the right directory
    -> cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers'
    -> missing onnx module
    all solved by you guys (thanks a lot), but here I'm stuck.
    Any help?
  167. hex
    Sat, Nov 05, 2022, 04:16:47
    @Pitonne that is the second script you have to run. The first is convert_original_stable_diffusion_to_diffusers.py -- did you do that? Here's a .bat file that I created that converts all .ckpt files in a directory to onnx.

    for %%f in (*.ckpt) do (
    python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./%%f" --dump_path="./%%~nf-diff"
    IF ERRORLEVEL 1 GOTO errorHandling
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="%%~nf-diff" --output_path="%%~nf-onnx"
    if ERRORLEVEL 1 GOTO errorHandling
    )

    :errorHandling

    Simply put the .bat file you created with the script above, the ckpt file(s) and two python scripts in the same directory and run the .bat file. It will convert ALL ckpt files in the directory, so you can do more than one. The output directories are the ones with -onnx in the name, you can delete the directories with a -diff in the name. You can move the -onnx directories wherever you want after that. Hope that helps.
  168. Pitonne
    Sat, Nov 05, 2022, 14:19:49
    Thanks @hex for the answer. I don't know if it works - I instead followed the tutorial from the start once again, and this time it got a bit further. It downloaded, but then it started having trouble with tensors converting into booleans (it seems that's not a problem),
    and then:
    AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\onnx_utils.py", line 63, in load_model
    return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
    NameError: name 'ort' is not defined. Did you mean: 'oct'?

    Should I replace all 'ort' with 'oct'?

  169. Dee
    Thu, Nov 10, 2022, 19:58:13
    Is it normal that it uses 12GB of RAM even though it already uses 8GB of VRAM?
  170. Daniel
    Sat, Nov 12, 2022, 06:03:08
    Regarding the first comment from @Stephen: I was able to fix that same error I was having by making sure my AMD drivers were up to date. For some reason, after a Windows update, my drivers magically seemed to have either been deleted or become unusable, and my card was disabled in Device Manager. After re-enabling it and redownloading the drivers, it started working.
  171. Fang
    Sat, Nov 12, 2022, 21:30:56
    I keep crashing when attempting to run the text2img script...
    The error I keep getting references the "onnxruntime_inference_collection.py" script, at line 200 in run:
    return self._sess.run(output_names, input_feed, run_options)
    ONNXRuntimeError 2 INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int64)) , expected: (tensor(float))
    Anyone have an idea what I could try to fix this error?
  172. Leo
    Sat, Nov 12, 2022, 22:49:29
    I am currently trying to set this up and I have a error when I try to run the text2img.py
    Traceback (most recent call last):
    File "C:\Users\MyUserName\stable-diffusion\text2img.py", line 6, in
    image = pipe(prompt).images[0]
    File "C:\Users\IsNone\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_onnx.py", line 132, in __call__
    noise_pred = self.unet(
    File "C:\Users\OfYour\stable-diffusion\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 51, in __call__
    return self.model.run(None, inputs)
    File "C:\Users\Business\stable-diffusion\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 200, in run
    return self._sess.run(output_names, input_feed, run_options)
    onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Unexpected input data type. Actual: (tensor(int64)) , expected: (tensor(float))

    If anyone can help me, it would be greatly appreciated.
  173. Clawheart
    Sun, Nov 13, 2022, 03:28:05
    I am experiencing the same error; it is likely related to a recent update that was done to the code. It was working fine two days ago.
  174. adysloud
    Mon, Nov 14, 2022, 05:18:24
    Hi, I received this error when trying to run the python program.

    [E:onnxruntime:, inference_session.cc:1484 onnxruntime::InferenceSession::Initialize::::operator ()] Exception during initialization: D:\a\_work\1\s\onnxruntime\core\providers\dml\DmlExecutionProvider\src\ExecutionProvider.cpp(563)\onnxruntime_pybind11_state.pyd!00007FFD43526051: (caller: 00007FFD43525DF2) Exception(2) tid(20bc) 887A0005 GPU ?
    Traceback (most recent call last):
    File "E:\AI\stable-diffusion-amd\text2img.py", line 2, in <module>
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 628, in from_pretrained
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 182, in from_pretrained
    return cls._from_pretrained(
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 149, in _from_pretrained
    model = OnnxRuntimeModel.load_model(
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\diffusers\onnx_utils.py", line 63, in load_model
    return ort.InferenceSession(path, providers=[provider], sess_options=sess_options)
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
    File "E:\AI\stable-diffusion-amd\virtualenv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
    onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException

    It seems that an error occurred during initialization. I tried to retrieve relevant information online, but no solution was found. Does anyone know what I should do?

    My graphics device: AMD Radeon HD 8570M
  175. VVulter
    Mon, Nov 14, 2022, 12:22:23
    This is what I get using python text2img.py

    Traceback (most recent call last):
    File "D:\stable-diffusion\text2img.py", line 12, in
    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    File "D:\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 300, in from_pretrained
    config_dict = cls.get_config_dict(cached_folder)
    File "D:\stable-diffusion\virtualenv\lib\site-packages\diffusers\configuration_utils.py", line 201, in get_config_dict
    raise EnvironmentError(
    OSError: Error no file named model_index.json found in directory ./stable_diffusion_onnx.
  176. Luxion
    Wed, Nov 16, 2022, 16:32:43
    If you have trouble using the latest nightly version of onnx directml and get a "NameError: name 'ort' is not defined. Did you mean: 'oct'?" error, then follow this mini guide to get it working:

    (activate the environment)
    (make sure you have an internet connection)
    pip uninstall onnx
    pip uninstall onnxruntime
    pip uninstall ort_nightly_directml
    pip uninstall onnxruntime-directml
    (place the latest .whl for your version of python at the same location where the 'virtualenv' folder is)
    pip install ./you_downloaded_latest_ort_nightly_directml_file.whl --force-reinstall

    -- Attention: if you get an error notification about 'protobuf' do these commands as well (in this order):
    pip uninstall protobuf
    pip uninstall tensorboard
    pip install tensorboard
    (when installing tensorboard at this point it will automatically install a compatible version of protobuf for you)

    Now you must go to .\virtualenv\Lib\site-packages\diffusers\utils\ and open the "import_utils.py" file with a text editor.
    Edit this line:
    candidates = ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml", "onnxruntime-openvino")
    To this:
    candidates = ("onnxruntime", "onnxruntime-gpu", "onnxruntime-directml", "onnxruntime-openvino", "ort_nightly_directml")

    Save the file and you should be ready to roll.
  177. [email protected]
    Sat, Nov 19, 2022, 23:03:45
    I have some problems in the part "import onnx
    ModuleNotFoundError: No module named 'onnx' "
  178. Luxion
    Sun, Nov 20, 2022, 18:11:04
    @gabriel
    Usually the solution to 'No module named xxxx' is to run 'pip install xxxx'. In this case just run: 'pip install onnx' - you probably only need it for the diffusers to onnx converter.
  179. [email protected]
    Wed, Nov 23, 2022, 14:29:32
    I've tried to run it, written in so many ways, and tried making a new folder. Nothing works. Every time I get this message:
    (virtualenv) PS C:\Users\Greencom> pip install Downloads:\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64 --force-reinstall
    ERROR: Invalid requirement: 'Downloads:\\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64'
    Hint: It looks like a path. File 'Downloads:\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64' does not exist.

    Am I missing something? I've tried to look it up, but all I can find is that PowerShell/pip is unable to locate the file, and I can't figure out why.
  180. [email protected]
    Wed, Nov 23, 2022, 14:50:17
    I have also tried this, as an example:
    (virtualenv) PS C:\Users\Greencom> pip install Downloads:\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64.whl --force-reinstall
    WARNING: Requirement 'Downloads:\\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64.whl' looks like a filename, but the file does not exist
    ERROR: Could not install packages due to an OSError: Bad path: C:\Users\Greencom\Downloads:\ort_nightly_directml-1.13.0.dev20220908001-cp310-cp310-win_amd64.whl
  181. EMokro
    Sat, Nov 26, 2022, 01:49:39
    Thanks :)
    "pip install diffusers==0.3.0" I had to take a newer version. Now I'm on 0.9.0
  182. Georg
    Wed, Nov 30, 2022, 12:34:40
    Hi Neil!
    I've stumbled upon a CUDA emulation software that works via injection. It's supposed to make CUDA usable on other GPUs, especially AMD's. It was released in 2010 and I can't test it due to lack of an AMD GPU, but would you be willing to give this a try and see if it actually makes SD usable in an easier fashion?
    https://www.techpowerup.com/119073/nvidia-cuda-emulator-for-every-pc

    I'm currently running an Nvidia GTX 1070 and I'm thinking about getting an RX 6700 XT, ideally without losing the comfort of using SD with tools like NMKD's SD GUI (which also provides one-click installation of SD itself).
  183. Georg
    Thu, Dec 01, 2022, 18:14:30
    Hi again. Turns out the CUDA emulator was an April Fools' joke (which I didn't realize because it crashed with a compatibility warning, showing the reveal only after clicking "ignore") -.-
  184. Georg
    Thu, Dec 01, 2022, 18:15:46
    (hit submit too early)
    However, running SD without an Nvidia GPU via the NMKD GUI actually works; it just runs on the CPU (a friend of mine tested it). It's quite a bit slower that way, though.
  185. evlcpy
    Fri, Dec 02, 2022, 03:15:05
    I was able to begin downloading the onnx'd version of Stable Diffusion, but I got an error:

    Traceback (most recent call last):
    File "S:\stablediffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 266, in
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
    File "S:\stablediffusion\virtualenv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "S:\stablediffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 80, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
    File "S:\stablediffusion\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 373, in from_pretrained
    load_method = getattr(class_obj, load_method_name)
    TypeError: getattr(): attribute name must be string

    --

    Googling says this error pops up when you switch off the safety checker,
    but I've been reading through pipeline_utils.py around load_method and I'm completely unable to find a shred of an idea of where to go from here.
  186. Slitka
    Fri, Dec 02, 2022, 19:39:04
    Getting the same error, evlcpy
  187. Facces
    Sat, Dec 03, 2022, 04:24:24
    Same error as evlcpy/Slitka
  188. waaaaaagh
    Sun, Dec 04, 2022, 01:24:54
    Same error as the three people above. This is getting tiresome, error after error after error after error...
  189. ehuiu
    Sun, Dec 04, 2022, 13:36:46
    I've been buying amd gpus for the past 10 years but all this fuckery is really making me want to switch to nvidia.
  190. string
    Sun, Dec 04, 2022, 13:39:52
    For the "attribute name must be string" error, try pip install diffusers==0.8.0
  191. Facces
    Sun, Dec 04, 2022, 18:42:38
    Traceback (most recent call last):
    File "E:\stable-diffusion\text2img.py", line 1, in
    from diffusers import StableDiffusionOnnxPipeline
    ModuleNotFoundError: No module named 'diffusers'

    Getting this error
  192. greg_114
    Mon, Dec 05, 2022, 07:05:12
    I set up a newer version, diffusers==0.9.0, but I got a new error:

    RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  193. Facces
    Mon, Dec 05, 2022, 14:41:29
    Yeah, now I'm getting this error:

    Traceback (most recent call last):
    File "E:\stable-diffusion\text2img.py", line 2, in
    pipe = StableDiffusionOnnxPipeline.from_pretrained("E:\stable-diffusion\stable_diffusion_onnx", provider="DmlExecutionProvider")
    File "E:\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 492, in from_pretrained
    config_dict = cls.load_config(cached_folder)
    File "E:\stable-diffusion\virtualenv\lib\site-packages\diffusers\configuration_utils.py", line 318, in load_config
    raise EnvironmentError(
    OSError: Error no file named model_index.json found in directory E:\stable-diffusion\stable_diffusion_onnx.
  194. mark
    Thu, Dec 08, 2022, 10:36:11
    It seems impossible to change latents = get_latents_from_seed(seed, 512, 512).

    If I put in another resolution, it gives an error about something divided by 8. What is the solution to generate a 1024x768 image, for example? Nothing works that isn't 512x512.
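
    (Both dimensions have to be multiples of 8, because the latents are built at 1/8 scale, and the same width and height must be passed to both the latents helper and the pipe() call. A minimal sketch, reusing the guide's get_latents_from_seed() and assuming your GPU has the memory for it:

    width, height = 1024, 768  # both divisible by 8
    latents = get_latents_from_seed(seed, width, height)
    image = pipe(prompt, height=height, width=width, num_inference_steps=25, guidance_scale=7.5, latents=latents).images[0]

    Note that resolutions far from 512x512 also tend to need noticeably more VRAM.)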
  195. Facces
    Thu, Dec 08, 2022, 12:22:29
    My installation is working now, but it's outdated, and I don't know how to update.
  196. 23232
    Thu, Dec 08, 2022, 12:38:41
    Made a guide here. https://rentry.org/7ux9m
  197. m8ax
    Fri, Dec 09, 2022, 01:40:43
    ################################################################################
    # Python Program Created By MARCOS OCHOA DIEZ - M8AX - MvIiIaX - http://youtube.com/m8ax - https://oncyber.io/@m8ax
    # A program for creating images with Stable Diffusion, an image-generating AI, on an AMD GPU, in the case of this Python program...
    # In my case I tested it on the integrated GPU of my AMD RYZEN 4800H, and it manages to multiply the speed by 10 compared to using the CPU for image generation, and it's not just any CPU... 🤣
    #
    # Example 1 - py m8ax_gpu_sd.py --help
    #
    # Shows the options that can be added on the command line...
    #
    # Example 2 - py m8ax_gpu_sd.py -p "red monster with hat"
    #
    # Creates a 512x512 image with the default 25 steps, the default scale of 7.5, a random seed, and creates only one image...
    #
    # Example 3 - py m8ax_gpu_sd.py -p "red monster with hat" -w 512 -h 512 -st 20 -g 7 -s 5000000 -c 5
    #
    # Creates 5 images of 512x512 with 20 steps, scale 7, seed 5000000... With the -c 5 option added, the seeds will be random for each image even if one is specified... -c 5 = create 5 images.
    # When creating several images, if the -st and -g values are specified, only the seeds will be random; if they are not specified, the defaults are used, except in one specific case...
    # If -st is 0, steps will be random from 20 to 100. If -g is 0, the scale will be random from 6.5 to 20.5.
    # If -c is 0, only 10 images will be created; any other number generates that many images; leaving the option out creates a single image...
    #
    #
    # IMPORTANT NOTE - You can change the directory in os.mkdir('M8AX-Imágenes_Finales-M8AX') to whatever you want. In fact, I recommend changing it, because otherwise it won't work for you, unless you also change
    # the following line of code - os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX'). I don't think that needs any explanation... Enjoy!!!
    #
    # By M8AX On December 09 2022 01:20h
    #
    ################################################################################

    from os import remove
    from PIL import Image, ImageChops, ImageEnhance, ImageOps
    from diffusers import StableDiffusionOnnxPipeline
    from datetime import datetime, timedelta
    import os
    import errno
    import time
    import math
    import click
    import numpy as np

    @click.command()
    @click.option("-p", "--prompt", required=True, type=str)
    @click.option("-w", "--width", required=False, type=int, default=512)
    @click.option("-h", "--height", required=False, type=int, default=512)
    @click.option("-st", "--steps", required=False, type=int, default=25)
    @click.option("-g", "--guidance-scale", required=False, type=float, default=7.5)
    @click.option("-s", "--seed", required=False, type=int, default=None)
    @click.option("-c", "--cuan", required=False, type=int, default=None)
    def run(
            prompt: str,
            width: int,
            height: int,
            steps: int,
            cuan: int,
            guidance_scale: float,
            seed: int):

        pipe = StableDiffusionOnnxPipeline.from_pretrained(
            "./stable_diffusion_onnx",
            provider="DmlExecutionProvider"
        )

        # Disable the safety checker.
        pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))

        seed = np.random.randint(np.iinfo(np.int32).max) if seed is None else seed
        latents = get_latents_from_seed(seed, width, height)

        cuan = 1 if cuan is None else cuan

        try:
            os.mkdir('M8AX-Imágenes_Finales-M8AX')
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise

        # A value of 1 marks "randomize this setting for every image".
        xsteps = 5
        xguidance_scale = 5

        if steps == 0:
            xsteps = 1

        if guidance_scale == 0:
            xguidance_scale = 1

        if cuan == 0:
            cuan = 10

        num_images = 0
        iniciot = time.time()

        while num_images < cuan:

            if cuan > 1:
                seed = np.random.randint(np.iinfo(np.int32).max)
                latents = get_latents_from_seed(seed, width, height)
            if xsteps == 1:
                steps = np.random.randint(20, 100)
            if xguidance_scale == 1:
                guidance_scale = math.floor(np.random.uniform(low=6.5, high=20.5, size=(1, 1)) * 10) / 10
            inicio = time.time()
            num_images = num_images + 1
            print(f"\nUsing seed {seed} to generate the image...\n")
            image = pipe(prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=guidance_scale, latents=latents).images[0]
            # Sharpen the result, upscale it to 2048x2048, and save it under a descriptive file name.
            image.save("Salida.PnG")
            image = Image.open("Salida.PnG")
            image = ImageEnhance.Sharpness(image).enhance(3)
            image.save("Fin.PnG")
            image = Image.open('Fin.PnG')
            image = image.resize((2048, 2048))
            os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX')
            nombreimg = time.strftime("%d_%m_%y") + "-" + time.strftime("%H_%M_%S") + "---Res-" + str(width) + " x " + str(height) + "__Steps-" + str(steps) + "__GScale-" + str(guidance_scale) + "__Seed-" + str(seed) + "__FinalResNum-" + '2048 x 2048-----' + str(num_images) + '.PnG'
            image.save(nombreimg, 'png')
            # The script assumes it was launched from E:\Stable_Diffusion; the temp files live there.
            os.chdir('E:\\Stable_Diffusion')
            remove("Salida.PnG")
            remove("Fin.PnG")
            fin = time.time()
            en1hora = (3600 * num_images) / (fin - iniciot)
            print(f"\nResulting file name:\n\n{nombreimg}")
            print(f"\nImage generated. Compute time - {fin - inicio} seconds...")
            print(f"\nGeneration rate - {en1hora} images/hour...")

        fint = time.time()
        tiempot = fint - iniciot
        convertido = segahms(tiempot)
        print(f"\nFinished generating {cuan} images. Total GPU compute time - {tiempot} seconds...\n\nTotal GPU compute time --- {convertido}\n\nBy M8AX - http://youtube.com/m8ax\n\nFeel free to subscribe, I won't get mad... xD")

    def get_latents_from_seed(seed: int, width: int, height: int) -> np.ndarray:
        latents_shape = (1, 4, height // 8, width // 8)
        rng = np.random.default_rng(seed)
        image_latents = rng.standard_normal(latents_shape).astype(np.float32)
        return image_latents

    def segahms(segundos):
        # Convert a duration in seconds to "Xh:Ym:Zs".
        horas = int(segundos / 60 / 60)
        segundos -= horas * 60 * 60
        minutos = int(segundos / 60)
        segundos -= minutos * 60
        return f"{horas}h:{minutos}m:{int(segundos)}s"

    if __name__ == '__main__':
        run()
  198. m8ax
    Fri, Dec 09, 2022, 01:44:33
    I hope the Python program works for you and that you like it... One question: is this Stable Diffusion installation outdated? Why?

    I also have the normal install, which falls back to the CPU if you have AMD. I think they'll update it so that AMD GPUs can be used without headaches and generate higher-resolution images.
  201. m8ax
    Sat, Dec 10, 2022, 01:48:33
    Hello...
  202. m8ax
    Sat, Dec 10, 2022, 01:49:33
    ################################################################################
    # Python Program Created By MARCOS OCHOA DIEZ - M8AX - MvIiIaX - http://youtube.com/m8ax - https://oncyber.io/@m8ax
    # A program for creating images with Stable Diffusion, an image-generating AI, on an AMD GPU, in the case of this Python program...
    # In my case I tested it on the integrated GPU of my AMD RYZEN 4800H, and it manages to multiply the speed by 2.45 compared to using the CPU for image generation, and it's not just any CPU... 🤣
    #
    # Example 1 - py m8ax_gpu_sd.py --help
    #
    # Shows the options that can be added on the command line...
    #
    # Example 2 - py m8ax_gpu_sd.py -p "red monster with hat"
    #
    # Creates a 512x512 image with the default 25 steps, the default scale of 7.5, a random seed, and creates only one image...
    #
    # Example 3 - py m8ax_gpu_sd.py -p "red monster with hat" -w 512 -h 512 -st 20 -g 7 -s 5000000 -c 5
    #
    # Creates 5 images of 512x512 with 20 steps, scale 7, seed 5000000... With the -c 5 option added, the seeds will be random for each image even if one is specified... -c 5 = create 5 images.
    # When creating several images, if the -st and -g values are specified, only the seeds will be random; if they are not specified, the defaults are used, except in one specific case...
    # If -st is 0, steps will be random from 20 to 65. If -g is 0, the scale will be random from 6.5 to 20.5.
    # If -c is 0, only 10 images will be created; any other number generates that many images; leaving the option out creates a single image...
    #
    # ATTENTION... 3 BRUTAL NEW OPTIONS:
    #
    # 1. Option -f 5, for example... On finishing, it will create a video from all the generated images, using the x264 codec, MP4 format, and a frame rate of 5 images per second...
    #
    # 2. Option -wo "Tom Cruise", or -wo "Monster With Black Nose", or -wo "Trees", will search Lexica Art for that parameter and pick a random prompt, whether we are creating just one
    # image or several at once. If we are creating 500, each of them will have a different prompt, all related to the -wo parameter we set. Cool, right?...
    # If we are going to use the -wo option, we can just pass -p "hola", since -p is mandatory... but that way we don't have to write out a whole prompt, since we are going to use
    # the -wo option in this case... If what we want is a specific prompt of our own, then we don't use the -wo option, and -p has to contain a
    # "detailed prompt", you know - p "prompt details"... I hope this is easy to understand...
    #
    # 3. Option -j: to activate it, -j 1; to not use it... just leave it out... All this option does is use seeds that vary very little between one image and the next, so
    # that the generated images change a little from one to the next, but not much. The goal: if we set -f 25 and create enough images for
    # the video to last about 5 seconds... we are talking about 125 images... cool things can come out... But I'm telling you it's experimental... It may still do pretty things...
    #
    # EXAMPLES: 1. py m8ax_gpu_sd.py -p "red monster" -w 256 -h 256 -st 0 -g 0 -s 50000 -c 100 -f 5 -wo "amd cpu" -j 1
    #
    # -p "red monster" is overridden, because we added -wo "amd cpu", and that option generates random prompts related to "amd cpu". (The -wo option REQUIRES INTERNET
    # for the prompt search. If we don't use that option, we can go offline... xD...)
    # -w and -h at 256 create each image at a resolution of 256x256. With -st at 0, each generated image uses random steps from 20 to 65... that is, one image may be created with
    # 20 steps and another with 45. The -g option at 0 is more of the same, but in this case the random values are taken from 6.5 to 20.5. -g is the scale. -s 50000: the seed is overridden, because
    # in the -c option we put 100, and since that's more than one image we use a different seed for each so the images don't look too much alike... But since we set the -j option to 1, we are
    # at the same time saying we don't want much variation in the seeds, because we want to make a video of the generated images and we want them to resemble each other somewhat,
    # because the -f option is at 5, which says we want a 5 fps video of the 100 images from -c 100... I hope what I explained makes sense...
    #
    # 2. py m8ax_gpu_sd.py -p "red monster" -st 15 -g 7 -s 50000 -f 5
    #
    # Creates one image with the prompt "red monster", 15 steps, scale 7, seed 50000, and at the end creates a video of the image at 5 fps... which is silly... The -f option is
    # meant for when we generate more images.
    #
    #
    # 3. py m8ax_gpu_sd.py -p "red monster" -wo "Julia Roberts"
    #
    # All parameters at their defaults: 512x512, 25 steps, scale 7.5 and a random seed... Creates an image with a prompt from Lexica Art related to Julia Roberts. "Red monster"
    # is overridden...
    #
    # 4. py m8ax_gpu_sd.py -p "red monster" -c 200 -f 5
    #
    # 200 images of 512x512 with scale 7.5, 25 steps and random seeds.
    #
    # 5. py m8ax_gpu_sd.py -p "red monster" -st 20 -g 0 -s 5000
    #
    # Creates one 512x512 image computed with 20 steps, a random scale and seed 5000, on the theme "red monster"...
    #
    #
    # I HOPE THESE EXAMPLES MAKE IT CLEAR HOW THIS PROGRAM WORKS...
    #
    #
    # IMPORTANT NOTES:
    #
    # - In the working directory where it creates the images and so on, the program also creates a TXT file with the prompts used for each image, the image generation rate per
    # hour, minute and second, etc...
    #
    # - As a note: on my laptop, which has a Ryzen 4800H with its integrated GPU, the GPU generates 512x512 images at 25 steps per image at a rate of 29.5 images per hour... and the CPU, with
    # the Ryzen 4800H's 8 cores at full throttle, 12 images per hour, so we are talking about 2.45x faster. Not bad...
    #
    # These lines of code:
    #
    # - os.mkdir('M8AX-Imágenes_Finales-M8AX')
    # - os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX')
    # - fichero='E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- '+creadire+'\\Sesión --- '+creadire+'.TxT'
    # - os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- '+creadire)
    # - os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- '+creadire)
    #
    # - can be changed to whatever directories you use, but keep the sequence consistent with your directories. In fact, I recommend changing them, because otherwise... it won't work for you.
    # I don't think that needs any explanation... Enjoy!!!
    #
    # By M8AX On December 09 2022 12:00h
    #
    ################################################################################
  203. m8ax
    Sat, Dec 10, 2022, 01:49:56
    The code continues in the next comment.
  204. m8ax
    Sat, Dec 10, 2022, 01:50:18
    from os import remove
    from PIL import Image, ImageChops, ImageEnhance, ImageOps
    from diffusers import StableDiffusionOnnxPipeline
    from datetime import datetime, timedelta
    import os
    import errno
    import time
    import math
    import click
    import numpy as np
    import cv2
    import glob
    import requests
    import pandas as pd

    @click.command()
    @click.option("-p", "--prompt", required=True, type=str)
    @click.option("-w", "--width", required=False, type=int, default=512)
    @click.option("-h", "--height", required=False, type=int, default=512)
    @click.option("-st", "--steps", required=False, type=int, default=25)
    @click.option("-g", "--guidance-scale", required=False, type=float, default=7.5)
    @click.option("-s", "--seed", required=False, type=int, default=None)
    @click.option("-c", "--cuan", required=False, type=int, default=None)
    @click.option("-f", "--fps", required=False, type=float, default=None)
    @click.option("-wo", "--word", required=False, type=str)
    @click.option("-j", "--pseed", required=False, type=int, default=None)
    def run(
            prompt: str,
            word: str,
            width: int,
            pseed: int,
            height: int,
            steps: int,
            cuan: int,
            fps: int,
            guidance_scale: float,
            seed: int):

        pipe = StableDiffusionOnnxPipeline.from_pretrained(
            "./stable_diffusion_onnx",
            provider="DmlExecutionProvider"
        )

        # Disable the safety checker.
        pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))
        fps = 0 if fps is None else fps
        pseed = 0 if pseed is None else pseed
        word = "" if word is None else word
        seed = np.random.randint(np.iinfo(np.int32).max) if seed is None else seed
        latents = get_latents_from_seed(seed, width, height)
        cuan = 1 if cuan is None else cuan
        creadire = time.strftime("%d_%m_%y") + '-' + time.strftime("%H_%M_%S")

        try:
            os.mkdir('M8AX-Imágenes_Finales-M8AX')
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise

        # A value of 1 marks "randomize this setting for every image".
        xsteps = 5
        xguidance_scale = 5

        if steps == 0:
            xsteps = 1

        if guidance_scale == 0:
            xguidance_scale = 1

        if cuan == 0:
            cuan = 10

        num_images = 0
        iniciot = time.time()

        os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX')
        fichero = 'E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- ' + creadire + '\\Sesión --- ' + creadire + '.TxT'
        gorko = np.random.randint(999999999, 2147483647)

        try:
            os.mkdir('Sesión --- ' + creadire)
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise

        os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- ' + creadire)

        with open(fichero, 'a') as file:
            file.write("----------------------------\n")
            file.write("Sesión --- " + time.strftime("%d_%m_%y") + "-" + time.strftime("%H_%M_%S") + "\n")
            file.write("----------------------------\n")

        while num_images < cuan:

            if word != "":
                # Pull a random prompt related to `word` from the Lexica API.
                s = requests.Session()
                r = s.get('https://lexica.art/api/v1/search?q=' + word)
                df = pd.json_normalize(r.json()['images'])
                longi = np.random.randint(len(df))
                prompt = df.prompt[longi]
            if cuan > 1:
                seed = np.random.randint(np.iinfo(np.int32).max)
                latents = get_latents_from_seed(seed, width, height)
            if pseed == 1 and cuan > 1:
                # Keep seeds within a narrow band so consecutive images vary only slightly.
                seed = np.random.randint(gorko, (gorko + (cuan * 3)))
            if xsteps == 1:
                steps = np.random.randint(20, 65)
            if xguidance_scale == 1:
                guidance_scale = math.floor(np.random.uniform(low=6.5, high=20.5, size=(1, 1)) * 10) / 10
            inicio = time.time()
            num_images = num_images + 1
            print(f"\nPROMPT:\n\n{prompt}")
            print(f"\nUsing seed {seed} to generate the image...\n")
            image = pipe(prompt, height=height, width=width, num_inference_steps=steps, guidance_scale=guidance_scale, latents=latents).images[0]
            # Sharpen the result, upscale it to 2048x2048, and save it under a descriptive file name.
            image.save("Salida.PnG")
            image = Image.open("Salida.PnG")
            image = ImageEnhance.Sharpness(image).enhance(3)
            image.save("Fin.PnG")
            image = Image.open('Fin.PnG')
            image = image.resize((2048, 2048))
            os.chdir('E:\\M8AX-AmD_Stable_Diffusion_AmD-M8AX\\M8AX-Imágenes_Finales-M8AX\\Sesión --- ' + creadire)
            nombreimg = time.strftime("%d_%m_%y") + "-" + time.strftime("%H_%M_%S") + "---Res-" + str(width) + " x " + str(height) + "__Steps-" + str(steps) + "__GScale-" + str(guidance_scale) + "__Seed-" + str(seed) + "__Cuan-" + str(cuan) + "__Fps-" + str(fps) + "__Word-" + str(word) + "__FinalResNum-" + '2048 x 2048-----' + str(num_images) + '.PnG'
            image.save(nombreimg, 'png')
            remove("Salida.PnG")
            remove("Fin.PnG")
            fin = time.time()
            en1hora = (3600 * num_images) / (fin - iniciot)
            print(f"\nResulting file name:\n\n{nombreimg}")
            print(f"\nImage generated. Compute time - {fin - inicio} seconds...")
            print(f"\nGeneration rate - {en1hora} images/hour...")
            print(f"\nGeneration rate - {(60 * num_images) / (fin - iniciot)} images/min...")
            print(f"\nGeneration rate - {(num_images) / (fin - iniciot)} images/sec...")
            with open(fichero, 'a') as file:
                file.write("\nPROMPT:\n\n" + str(prompt) + "\n\nResulting file name: " + str(nombreimg) + "\n\nImage generated. Compute time - " + str(fin - inicio) + " seconds...\n\nGeneration rate - " + str(en1hora) + " images/hour...\n\nGeneration rate - " + str((60 * num_images) / (fin - iniciot)) + " images/min...\n\n" + "Generation rate - " + str((num_images) / (fin - iniciot)) + " images/sec...")
                file.write("\n\n" + "-" * 120 + "\n")

        if fps > 0:
            # Stitch every generated PNG in the session folder into an MP4.
            # Note: "*.png" matches the .PnG files on Windows, where glob is case-insensitive.
            frameSize = (2048, 2048)
            out = cv2.VideoWriter('Sesion De Video - ' + str(cuan) + ' Imagenes - ' + str(fps) + ' Fps - ' + str(cuan / fps) + ' Frames --- ' + creadire + '.Mp4', cv2.VideoWriter_fourcc(*'x264'), fps, frameSize)
            for filename in glob.glob("*.png"):
                img = cv2.imread(filename)
                out.write(img)
                print(f"\n{filename}")
            out.release()
            print(f"\nVideo created from {cuan} images at a frame rate of {fps} fps. (MP4 FORMAT)...")
            with open(fichero, 'a') as file:
                file.write("\nVideo created from " + str(cuan) + " images at a frame rate of " + str(fps) + " fps. (MP4 FORMAT)...\n")

        fint = time.time()
        tiempot = fint - iniciot
        convertido = segahms(tiempot)
        with open(fichero, 'a') as file:
            file.write("\nFinished generating " + str(cuan) + " images. Total GPU compute time - " + str(tiempot) + " seconds...\n\n" + "Total GPU compute time --- " + str(convertido) + "\n\nBy M8AX - http://youtube.com/m8ax - https://oncyber.io/m8ax\n\nFeel free to subscribe, I won't get mad... xD")
            file.write("\n\n" + "-" * 120)
        print(f"\nFinished generating {cuan} images. Total GPU compute time - {tiempot} seconds...\n\nTotal GPU compute time --- {convertido}\n\nBy M8AX - http://youtube.com/m8ax - https://oncyber.io/m8ax\n\nFeel free to subscribe, I won't get mad... xD")

    def get_latents_from_seed(seed: int, width: int, height: int) -> np.ndarray:
        latents_shape = (1, 4, height // 8, width // 8)
        rng = np.random.default_rng(seed)
        image_latents = rng.standard_normal(latents_shape).astype(np.float32)
        return image_latents

    def segahms(segundos):
        # Convert a duration in seconds to "Xh:Ym:Zs".
        horas = int(segundos / 60 / 60)
        segundos -= horas * 60 * 60
        minutos = int(segundos / 60)
        segundos -= minutos * 60
        return f"{horas}h:{minutos}m:{int(segundos)}s"

    if __name__ == '__main__':
        run()
  205. m8ax
    Sat, Dec 10, 2022, 01:51:10
    If anybody needs help: @mviiiax on Telegram.
  206. m8ax
    Sat, Dec 10, 2022, 18:36:46
    Can I delete the .cache folder at c:\users\nameuser\.cache?
  207. m8ax
    Tue, Dec 13, 2022, 00:21:37
    https://github.com/m8ax/Programa-En-Python-Para-Manejar-Stable-Diffusion-Corriendo-En-GPU-AMD.-Inclluidas-Las-Integradas
  208. nugenart
    Wed, Dec 14, 2022, 05:39:12
    I LOVE YOU NEIL
  209. joko
    Thu, Dec 15, 2022, 06:32:45
    Can you change the model type with this method?
  210. Ashen
    Sat, Dec 17, 2022, 23:12:59
    I am stuck on this:
    TypeError: StableDiffusionOnnxPipeline.__init__() got an unexpected keyword argument 'requires_safety_checker'
  211. spulg
    Mon, Dec 19, 2022, 15:49:07
    For those who get the error of not being able to import "OnnxStableDiffusionPipeline":

    Change line 24 to simply "from diffusers import StableDiffusionPipeline"

    and line 223 to "onnx_pipeline = StableDiffusionPipeline("

    Apparently the two functions have been merged into one.
  212. ALEXANDER
    Tue, Dec 20, 2022, 10:25:45
    I installed this and everything works well!
    I wonder which Stable Diffusion version I have when following this guide?

    Here is a cool trick to make the code generate 1000 different images overnight with the same prompt!
    from diffusers import StableDiffusionOnnxPipeline
    import numpy as np

    def get_latents_from_seed(seed: int, width: int, height: int) -> np.ndarray:
        # 1 is batch size
        latents_shape = (1, 4, height // 8, width // 8)
        # Gotta use numpy instead of torch, because torch's randn() doesn't support DML
        rng = np.random.default_rng(seed)
        image_latents = rng.standard_normal(latents_shape).astype(np.float32)
        return image_latents

    pipe = StableDiffusionOnnxPipeline.from_pretrained("./stable_diffusion_onnx", provider="DmlExecutionProvider")
    pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))
    """
    prompt: Union[str, List[str]],
    height: Optional[int] = 512,
    width: Optional[int] = 512,
    num_inference_steps: Optional[int] = 50,
    guidance_scale: Optional[float] = 7.5, # This is also sometimes called the CFG value
    eta: Optional[float] = 0.0,
    latents: Optional[np.ndarray] = None,
    output_type: Optional[str] = "pil",
    """

    seed = 66666
    # Generate our own latents so that we can provide a seed.
    prompt = "A happy celebrating robot on a mountaintop, happy, landscape, dramatic lighting, art by artgerm greg rutkowski alphonse mucha, 4k uhd"
    for x in range(1000):  # 1000 = number of picture variants to make
        latents = get_latents_from_seed(seed, 512, 512)
        image = pipe(prompt, num_inference_steps=25, guidance_scale=13, latents=latents).images[0]
        image.save("output_" + str(x) + ".png")
        seed += 666
  213. ALEXANDER
    Tue, Dec 20, 2022, 10:29:43
    I get this error printed 6 times before it starts processing, but it still works. Is it easy to fix?
    "2022-12-20 11:27:13.0485271 [W:onnxruntime:, inference_session.cc:490 onnxruntime::InferenceSession::RegisterExecutionProvider] Having memory pattern enabled is not supported while using the DML Execution Provider. So disabling it for this session since it uses the DML Execution Provider."
  214. ldg
    Tue, Dec 20, 2022, 12:40:03
    How can I use a model of my face, already trained and in ckpt format, to generate images with my face? The official version handles ckpt files, but this one doesn't... how do I do it?
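
    (One route that should work here, assuming you have the diffusers repo's conversion scripts on hand: first convert the .ckpt into the diffusers layout, then run this guide's ONNX converter on the result. The file names below are placeholders:

    python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./my_face.ckpt" --dump_path="./my_face_diffusers"
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./my_face_diffusers" --output_path="./my_face_onnx"

    Then point StableDiffusionOnnxPipeline.from_pretrained() at ./my_face_onnx as usual.)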
  215. ALEXANDER
    Fri, Dec 23, 2022, 12:37:17
    reply to ldg:
    I'm trying to do the same thing now. I have the .ckpt file and have tried convert_stable_diffusion_checkpoint_to_onnx.py, but it errors with "is not a valid JSON file".

  217. Jonor
    Sun, Dec 25, 2022, 17:26:20
    Thanks for the guide, but I fumble on the last step, that is, making an image.
    I get this error: "Traceback (most recent call last):
    File "C:\StableDiffusion\text2img_b.py", line 30, in <module>
    image = pipe(prompt, num_inference_steps=25, guidance_scale=13, latents=latents).images[0]
    File "C:\StableDiffusion\virtualenv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_onnx_stable_diffusion.py", line 250, in __call__
    latents = latents * np.float(self.scheduler.init_noise_sigma)
    File "C:\StableDiffusion\virtualenv\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
    raise AttributeError("module {!r} has no attribute "
    AttributeError: module 'numpy' has no attribute 'float' "
  218. Jan
    Mon, Dec 26, 2022, 19:59:34
    Regarding the numpy float error: np.float was removed from recent NumPy releases. Probably use np.single, see https://numpy.org/doc/stable/user/basics.types.html
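
    Two possible fixes, sketched out; the offending line lives in pipeline_onnx_stable_diffusion.py, and pinning numpy is the less invasive option:

    # Option 1: pin a numpy release that still has the np.float alias
    pip install "numpy<1.24"

    # Option 2: patch the line in pipeline_onnx_stable_diffusion.py to use a type that still exists
    latents = latents * np.single(self.scheduler.init_noise_sigma)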
  219. giorgio
    Sun, Jan 01, 2023, 11:22:13
    Brutal script for this at https://github.com/m8ax/Programa-En-Python-Para-Manejar-Stable-Diffusion-Corriendo-En-GPU-AMD.-Incluidas-Las-Integradas

    It's brutal.
  220. Piper
    Sun, Jan 08, 2023, 00:39:41
    When it gets to the point where I need to put in the Hugging Face token, both CMD and PowerShell simply won't let me type or paste into the editor.
    Ctrl+C won't work, and right-clicking won't either. I can't even type the token manually.
    If I press Enter, it asks me if I want to include the token (which I haven't provided) as git credentials. Enter is the only input I can give at that moment.
  221. Sup Piper
    Thu, Jan 12, 2023, 03:50:16
    Sup Piper, try right-clicking the title bar of the terminal window. There you should see Edit -> Paste. If Paste is ever greyed out, you usually just need to type some text into the CMD/PowerShell window and delete it; that makes your cursor active at the correct point in the terminal. Then you should be able to click the title bar -> Edit -> Paste. Note that huggingface-cli login does not echo your input, so you literally just paste via the instructions above and press Enter. Your token will be taken this way.
  222. cuchu
    Tue, Jan 17, 2023, 17:19:55
    ModuleNotFoundError: No module named 'onnx'
    after executing the conversion script; I checked the syntax.
    Help?
  223. cuchu
    Tue, Jan 17, 2023, 17:28:51
    Wrong token generated on Hugging Face :s thanks. I had made a token for write instead of read (for those who have the same problem).
  224. cuchu
    Tue, Jan 17, 2023, 17:31:16
    Oops, still stuck.
  225. Brando
    Wed, Jan 18, 2023, 00:55:02
    Hey, wanted to thank you for putting this together, but it's not working for me.

    (virtualenv) C:\Users\Brandon\Desktop\stable-diffusion\virtualenv>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\Brandon\Desktop\stable-diffusion\virtualenv\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    Please help. I have been trying 4 or 5 different tutorials to get this to work and NONE of them have been smooth; I've been getting errors that nobody else seems to get. Thanks in advance.
  226. Brando
    Wed, Jan 18, 2023, 01:01:31
    Okay, so if you're getting that error, just pip install onnx should work.

    However now I am getting this error

    (virtualenv) C:\Users\Brandon\Desktop\stable-diffusion\virtualenv>python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Users\Brandon\Desktop\stable-diffusion\virtualenv\convert_stable_diffusion_checkpoint_to_onnx.py", line 24, in
    from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, StableDiffusionPipeline
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers' (C:\Users\Brandon\Desktop\stable-diffusion\virtualenv\lib\site-packages\diffusers\__init__.py)

    (virtualenv) C:\Users\Brandon\Desktop\stable-diffusion\virtualenv>
  227. gorko
    Sat, Jan 21, 2023, 20:30:50
    I don't know how to convert ckpt models to ONNX. I've tried everything and nothing works.
  228. KrkaD
    Fri, Jan 27, 2023, 17:42:59
    I am getting the same errors:

    File "E:\SD AMD WIN\OnnxDiffusersUI-main\convert_stable_diffusion_checkpoint_to_onnx.py", line 24, in
    from diffusers import OnnxStableDiffusionPipeline, StableDiffusionPipeline
    ModuleNotFoundError: No module named 'diffusers'pip install diffuser

    So I did "pip install diffuser"

    And tried again. And it asked for another install and I tried to install that.

    E:\SD AMD WIN\OnnxDiffusersUI-main>python convert_stable_diffusion_checkpoint_to_onnx.py --"E:\SD AMD WIN\OnnxDiffusersUI-main\dndMapGenerator_v2Gridless.ckpt" --"E:\SD AMD WIN\OnnxDiffusersUI-main\dndmaps"
    Traceback (most recent call last):
    File "E:\SD AMD WIN\OnnxDiffusersUI-main\convert_stable_diffusion_checkpoint_to_onnx.py", line 25, in
    from diffusers.onnx_utils import OnnxRuntimeModel
    ModuleNotFoundError: No module named 'diffusers.onnx_utils'

    E:\SD AMD WIN\OnnxDiffusersUI-main>pip install diffusers.onnx_utils
    ERROR: Could not find a version that satisfies the requirement diffusers.onnx_utils (from versions: none)
    ERROR: No matching distribution found for diffusers.onnx_utils
  229. Frank
    Tue, Jan 31, 2023, 12:04:18
    I'm stuck before the download. Do you have any idea what I can do?

    (virtualenv) PS C:\ai2> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\ai2\convert_stable_diffusion_checkpoint_to_onnx.py", line 23, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'
  230. Phil
    Sat, Feb 04, 2023, 05:49:43
    Hey, I had some issues getting this to work, so just in case anyone else has the same problems...

    When following this step:

    pip install diffusers==0.3.0
    pip install transformers
    pip install onnxruntime

    Make sure to also run this:
    pip install onnx

    THEN, after logging in with your Hugging Face token:

    huggingface-cli.exe login

    You need to run:

    pip uninstall diffusers
    pip install diffusers

    THEN you can proceed to:

    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"

    And that should start the download.

    Hope that helps!
  231. Phil
    Sat, Feb 04, 2023, 05:51:18
    In case that wasn't clear above: you still need to paste the token in after

    huggingface-cli.exe login

    and before

    pip uninstall diffusers
    pip install diffusers
  232. Dirk
    Sat, Feb 04, 2023, 20:56:28
    Hey,
    I have an issue with the final installation.
    It stops with the error message "TypeError: StableDiffusionOnnxPipeline.__init__() got an unexpected keyword argument 'requires_safety_checker'"
    It's running on Windows 10 with an AMD CPU and GPU. PowerShell is running in admin mode.
    Can someone help, please?
  233. Django-Mu
    Tue, Mar 07, 2023, 18:17:26
    After this command:
    "python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="E:\stable-diffusion"

    The downloading starts and a few files get downloaded, but then I get this "attribute name must be string" error; see below, please:
    "Traceback (most recent call last):
    File "E:\stable-diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 265, in
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
    File "E:\stable-diffusion\virtualenv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
    File "E:\stable-diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 79, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
    File "E:\stable-diffusion\virtualenv\lib\site-packages\diffusers\pipeline_utils.py", line 373, in from_pretrained
    load_method = getattr(class_obj, load_method_name)
    TypeError: getattr(): attribute name must be string"
  234. Luxion
    Sat, Apr 01, 2023, 18:54:24
    This guide has been outdated for several months, and there has been a better alternative for a while:

    https://github.com/lshqqytiger/stable-diffusion-webui-directml

    Install that instead. It's essentially a port of the famous Automatic1111 UI to DirectML, which is compatible with most AMD cards - meaning every feature works: img2img/inpaint, LoRAs, ControlNet, upscalers, most extensions such as Dynamic Prompts, the 3D open pose editor, etc., etc...

    Read the instructions on the GitHub page. You should add these args to webui-user.bat like so:

    set COMMANDLINE_ARGS=--precision full --no-half --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1

    If your GPU has LESS than 10GB of VRAM, you might want to add this arg as well: --medvram

    If your GPU has LESS than 6GB of VRAM, add this instead: --lowvram

    If your GPU has at least 4GB, you followed the instructions correctly, and you still get 'not enough memory' kinds of errors, then try it without the '--opt-sub-quad-attention' arg.

    If you have trouble with SD failing to launch offline, add these: --skip-install --skip-version-check

    Good luck, and enjoy your local free image generator!
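
    As a concrete example, a webui-user.bat for a card with less than 10GB of VRAM could end up looking like this (a sketch; the stock file already contains the empty set lines, and you only edit COMMANDLINE_ARGS):

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--precision full --no-half --no-half-vae --opt-sub-quad-attention --opt-split-attention-v1 --medvram

    call webui.bat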
  235. groschenopa
    Sat, Apr 29, 2023, 13:46:15
    When I try to run the ONNX conversion script, I only get an error that ends in this:

    File "C:\stable-diffusion\virtualenv\lib\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
    File "C:\stable-diffusion\virtualenv\lib\site-packages\torch\onnx\utils.py", line 1901, in _run_symbolic_function
    raise errors.UnsupportedOperatorError(
    torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.

    I already tried updating PyTorch, but it did not fix anything. Is there anything new to do for the current versions of SD and all the corresponding utils? Thanks in advance; great guide, but apparently overtaken by time. ;-)
  236. Name
    Sun, Apr 30, 2023, 22:42:04
    Worked for me up to the point where I got to the utility script.

    (virtualenv) PS C:\Diffusion\Stable-Diffusion> python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    Traceback (most recent call last):
    File "C:\Diffusion\Stable-Diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 20, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    But earlier the ONNX runtime seemed to install fine.

    (virtualenv) PS C:\Diffusion\Stable-Diffusion> pip install C:\Diffusion\Stable-Diffusion\virtualenv\ort_nightly_directml-1.15.0.dev20230429003-cp311-cp311-win_amd64.whl --force-reinstall
    Processing c:\diffusion\stable-diffusion\virtualenv\ort_nightly_directml-1.15.0.dev20230429003-cp311-cp311-win_amd64.whl
    Collecting coloredlogs (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
    Collecting flatbuffers (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached flatbuffers-23.3.3-py2.py3-none-any.whl (26 kB)
    Collecting numpy>=1.24.2 (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached numpy-1.24.3-cp311-cp311-win_amd64.whl (14.8 MB)
    Collecting packaging (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached packaging-23.1-py3-none-any.whl (48 kB)
    Collecting protobuf (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached protobuf-4.22.3-cp310-abi3-win_amd64.whl (420 kB)
    Collecting sympy (from ort-nightly-directml==1.15.0.dev20230429003)
    Using cached sympy-1.11.1-py3-none-any.whl (6.5 MB)
    Collecting humanfriendly>=9.1 (from coloredlogs->ort-nightly-directml==1.15.0.dev20230429003)
    Using cached humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
    Collecting mpmath>=0.19 (from sympy->ort-nightly-directml==1.15.0.dev20230429003)
    Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
    Collecting pyreadline3 (from humanfriendly>=9.1->coloredlogs->ort-nightly-directml==1.15.0.dev20230429003)
    Using cached pyreadline3-3.4.1-py3-none-any.whl (95 kB)
    Installing collected packages: pyreadline3, mpmath, flatbuffers, sympy, protobuf, packaging, numpy, humanfriendly, coloredlogs, ort-nightly-directml
    Attempting uninstall: mpmath
    Found existing installation: mpmath 1.3.0
    Uninstalling mpmath-1.3.0:
    Successfully uninstalled mpmath-1.3.0
    Attempting uninstall: sympy
    Found existing installation: sympy 1.11.1
    Uninstalling sympy-1.11.1:
    Successfully uninstalled sympy-1.11.1
    Attempting uninstall: packaging
    Found existing installation: packaging 23.1
    Uninstalling packaging-23.1:
    Successfully uninstalled packaging-23.1
    Attempting uninstall: numpy
    Found existing installation: numpy 1.24.3
    Uninstalling numpy-1.24.3:
    Successfully uninstalled numpy-1.24.3
    Successfully installed coloredlogs-15.0.1 flatbuffers-23.3.3 humanfriendly-10.0 mpmath-1.3.0 numpy-1.24.3 ort-nightly-directml-1.15.0.dev20230429003 packaging-23.1 protobuf-4.22.3 pyreadline3-3.4.1 sympy-1.11.1
  238. Chris
    Mon, May 01, 2023, 07:05:11
    Hi! I had a little bit of trouble with the Hugging Face token; the solutions here helped me. But now, when I try to log in with huggingface-cli.exe, Windows says "Access Denied" :(
    I pressed N three times after pasting the key because I didn't see if/what was pasted. Could that be the problem? How can I fix it? :(
  239. Mark
    Sun, Aug 13, 2023, 09:11:28
    When I run
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"

    I get this error:
    Traceback (most recent call last):
    File "G:\stable-diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 20, in
    import onnx
    ModuleNotFoundError: No module named 'onnx'

    Not sure where I am going wrong... any advice would be appreciated!
  240. Erijai
    Thu, Aug 31, 2023, 19:40:25
    When I'm on the last step, "python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"", the download goes fine but the program gives me this problem:
    Fetching 33 files: 100%|███████████████████████████████████████████████████████████| 33/33 [00:00<00:00, 16487.44it/s]
    Traceback (most recent call last):
    File "C:\Users\erijai\Documents\stable-diffusion-gpu\convert_stable_diffusion_checkpoint_to_onnx.py", line 265, in
    convert_models(args.model_path, args.output_path, args.opset, args.fp16)
    File "C:\Users\erijai\Documents\stable-diffusion-gpu\virtualenv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\erijai\Documents\stable-diffusion-gpu\convert_stable_diffusion_checkpoint_to_onnx.py", line 79, in convert_models
    pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\erijai\Documents\stable-diffusion-gpu\virtualenv\Lib\site-packages\diffusers\pipeline_utils.py", line 373, in from_pretrained
    load_method = getattr(class_obj, load_method_name)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    TypeError: attribute name must be string, not 'NoneType'

    How can I fix it?
  241. Konstantin
    Sat, Oct 14, 2023, 07:53:01
    When I type
    python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="CompVis/stable-diffusion-v1-4" --output_path="./stable_diffusion_onnx"
    it says:
    Traceback (most recent call last):
    File "D:\Stable-diffusion\convert_stable_diffusion_checkpoint_to_onnx.py", line 25, in
    from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, StableDiffusionPipeline
    ImportError: cannot import name 'OnnxStableDiffusionPipeline' from 'diffusers' (D:\Stable-diffusion\virtualenv\lib\site-packages\diffusers\__init__.py)
    How do I solve this?
  242. Konstantin
    Sat, Oct 14, 2023, 07:59:13
    Konstantin,
    Never mind, I think I fixed it.