Sorry for the noob question. I have been researching this for hours but haven't made much progress. I am trying to load this checkpoint merge from CivitAI as a Hugging Face model: https://civitai.com/models/989221/illustration-juaner-ghibli-style-2d-illustration-model-flux Can you please give me some pointers on how to do this? I tried the following steps but received errors:
Command: python scripts/convert_flux_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --output_path "/Diffusers_IllustrationJuanerGhibli_v20" --transformer
Command: python scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --dump_path "/Diffusers_IllustrationJuanerGhibli_v20" --from_safetensors --device cuda
Code:
Error:
model_index.json: 0%| | 0.00/536 [00:00<?, ?B/s]
scheduler/scheduler_config.json: 0%| | 0.00/273 [00:00<?, ?B/s]
text_encoder/config.json: 0%| | 0.00/613 [00:00<?, ?B/s]
text_encoder_2/config.json: 0%| | 0.00/782 [00:00<?, ?B/s]
(…)t_encoder_2/model.safetensors.index.json: 0%| | 0.00/19.9k [00:00<?, ?B/s]
tokenizer/merges.txt: 0%| | 0.00/525k [00:00<?, ?B/s]
tokenizer/special_tokens_map.json: 0%| | 0.00/588 [00:00<?, ?B/s]
tokenizer/tokenizer_config.json: 0%| | 0.00/705 [00:00<?, ?B/s]
tokenizer/vocab.json: 0%| | 0.00/1.06M [00:00<?, ?B/s]
tokenizer_2/special_tokens_map.json: 0%| | 0.00/2.54k [00:00<?, ?B/s]
spiece.model: 0%| | 0.00/792k [00:00<?, ?B/s]
tokenizer_2/tokenizer.json: 0%| | 0.00/2.42M [00:00<?, ?B/s]
tokenizer_2/tokenizer_config.json: 0%| | 0.00/20.8k [00:00<?, ?B/s]
transformer/config.json: 0%| | 0.00/378 [00:00<?, ?B/s]
(…)ion_pytorch_model.safetensors.index.json: 0%| | 0.00/121k [00:00<?, ?B/s]
vae/config.json: 0%| | 0.00/820 [00:00<?, ?B/s]
Loading pipeline components...: 0%| | 0/6 [00:00<?, ?it/s]
Replies: 1 comment
It seems that you tried to load a model that is not an SDXL model with StableDiffusionXLPipeline. The model you actually want to load is a FLUX transformer checkpoint, not a full FLUX model: it is missing the T5 text encoder and the other pipeline components. You can load it via FluxPipeline by supplying the T5 encoder and the remaining components from a base FLUX repository.
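Here is a minimal sketch of that approach, assuming the checkpoint is a FLUX.1-dev based merge and that you have access to black-forest-labs/FLUX.1-dev on the Hub; the local path, prompt, and output filename are placeholders:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Local path to the downloaded CivitAI checkpoint (placeholder, adjust as needed).
ckpt_path = "IllustrationJuanerGhibli_v20.safetensors"

# The .safetensors file only contains transformer weights, so load just that component.
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path, torch_dtype=torch.bfloat16
)

# Take the T5/CLIP text encoders, tokenizers, VAE, and scheduler from the base
# FLUX repo and swap in the merged transformer.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helpful if the full pipeline does not fit in VRAM

image = pipe(
    "a Ghibli-style illustration of a quiet seaside town",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("ghibli_town.png")
```

If the goal is a reusable Diffusers-format folder rather than a one-off generation, you can call pipe.save_pretrained("Diffusers_IllustrationJuanerGhibli_v20") after assembling the pipeline instead of running the conversion scripts.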