Why do I get the expected result in ComfyUI, but only a black image with the Hugging Face diffusers pipeline?

#1
by luyangliu - opened


This is the Hugging Face diffusers program:

import torch
from diffusers import DiffusionPipeline

# Load the base Qwen-Image pipeline, then apply the LoRA on top of it.
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")
pipe.load_lora_weights("Raelina/Raena-Qwen-Image")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("output.png")

Sorry for the late response. I've uploaded a fixed version for use with diffusers. Please make sure to load the updated version in your pipeline, and let me know if you still encounter any issues. Thank you!

Can you tell me why? I used pipe.components to check the model layers, and they were correct. Why does using diffusers give me the wrong result?

The model layer blocks are correct, but the prefix in the layer names isn't. I've converted it to use transformer instead of diffusion_model. The fixed version should work properly; I've tested it on a public Space. For example, you can open the Qwen-Image-Lora-Explorer Space and simply paste the Raena-Qwen-Lora-Fix URL into the custom LoRA path.

I'm very sorry to bother you again. The main reason is that I had been working with your previous model for a long time but couldn't get it to work, and I was very confused about the cause. I also tried changing the prefix myself and found that the prefix was not the problem:

from safetensors.torch import load_file, save_file

model_data = load_file(lora_path + "/raena_qwen_image_lora_v0.1.safetensors")
new_model_data = {}
for key, value in model_data.items():
    # Rename the ComfyUI-style prefix to the one diffusers expects.
    new_key = key.replace('diffusion_model.', 'transformer.', 1)
    print(new_key)
    new_model_data[new_key] = value
new_model_path = 'modified_raena_qwen_image_lora_v0.1.safetensors'
save_file(new_model_data, new_model_path)
# Load the converted file, not the original directory.
pipe.load_lora_weights(new_model_path)

I'm not sure why you're still focused on the previous version, since I've already provided a fixed one. I don't know much about diffusers myself, so I'm not entirely sure what you mean. To fix it, I used the LoRA conversion tool from Musubi-Tuner.

I'm so sorry, I just wanted to research it and give it a try.

It's okay, no problem. Just follow the instructions and the argument link I provided to fix it, and make sure to pass "transformer" as the --diffusers_prefix.
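After converting, it may help to sanity-check the key names in the resulting file before loading it. A minimal sketch of that idea (the helper name is mine, not part of Musubi-Tuner, and the example keys are illustrative; in practice the key list would come from safetensors' load_file()):

```python
def keys_missing_prefix(keys, prefix="transformer."):
    """Return the keys that do NOT start with the prefix diffusers expects."""
    return [k for k in keys if not k.startswith(prefix)]

# Illustrative key names, one converted and one still in ComfyUI style.
converted = ["transformer.transformer_blocks.0.attn.to_q.lora_A.weight"]
stale = ["diffusion_model.transformer_blocks.0.attn.to_q.lora_A.weight"]
print(keys_missing_prefix(converted))  # [] when conversion succeeded
print(keys_missing_prefix(stale))      # lists the un-converted key
```

An empty list means every key carries the transformer. prefix, so diffusers should at least recognize the file.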

I'm sorry, but I'm getting a black image with both LoRAs, and I really don't know why. In fact, when I use Python, half of my LoRAs only produce black images, yet they all work in ComfyUI. I looked at the Python code and it seems to handle the different prefixes. The ones I can't run all use the add_k_proj text layer in the MMDiT, and some of them were trained with prefix tuning. I've been researching this for a while and really can't figure out why. I think I'll let it go for now; maybe something is wrong in the Python code and I need to add some conditions.
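For what it's worth, a black output can at least be detected programmatically while debugging. A minimal stdlib-only sketch (the function name is mine, and pixels stands in for a flattened image array; NaNs during denoising, e.g. from numerical overflow, are one common cause of all-black decodes):

```python
import math

def looks_blank(pixels, tol=1e-3):
    """Heuristic: True if the image is all-black/constant or contains NaNs."""
    vals = [float(v) for v in pixels]
    if any(math.isnan(v) for v in vals):
        return True
    return max(vals) - min(vals) < tol

print(looks_blank([0.0] * 12))            # all-black image -> True
print(looks_blank([10.0, 120.0, 240.0]))  # normal pixel spread -> False
```

Running this on the pipeline output (e.g. via list(image.getdata()) flattened) would distinguish "genuinely black" from "merely dark" results while testing each LoRA.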
