Speedup model loading by 4-5x ⚡ #11904

Merged: 17 commits merged into main on Jul 11, 2025

Conversation

@a-r-r-o-w (Member) commented Jul 10, 2025

All thanks to @Cyrilvallez's PR: huggingface/transformers#36380

The accelerate PR is required because, without it, we end up calling clear_device_cache in a loop, once per sharded checkpoint file, which is expensive. Without that change, you will see no speedup.
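Not the actual accelerate/diffusers code, just a minimal sketch of the pattern being fixed, assuming clear_device_cache is importable from accelerate.utils.memory (the shard list and loader below are placeholders):

from accelerate.utils.memory import clear_device_cache

shard_files = ["model-00001.safetensors", "model-00002.safetensors"]  # placeholder shard list

def load_shard_into_device(path):
    ...  # stand-in for the real per-shard loading logic

# Before: the device cache was cleared once per shard, so the overhead of
# clear_device_cache scaled with the number of checkpoint files.
for shard_file in shard_files:
    load_shard_into_device(shard_file)
    clear_device_cache()

# After: load every shard first, then clear the device cache a single time.
for shard_file in shard_files:
    load_shard_into_device(shard_file)
clear_device_cache()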

Another small optimization is using non_blocking=True for the copies everywhere and synchronizing only once, just before returning control to the user. This is slightly faster.
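As a rough illustration of that pattern (not the actual diffusers code path; copy_params_to_device is a made-up helper):

import torch

def copy_params_to_device(tensors, device="cuda"):
    # Queue every copy with non_blocking=True so the host does not stall on each
    # individual .to() (copies are only truly asynchronous for pinned host memory).
    moved = [t.to(device, non_blocking=True) for t in tensors]
    # Synchronize exactly once, right before handing control back to the caller,
    # so every tensor is guaranteed to be materialized on the device.
    torch.cuda.synchronize()
    return moved

if torch.cuda.is_available():
    params = [torch.randn(1024, 1024).pin_memory() for _ in range(8)]
    params = copy_params_to_device(params)

The script below is what produced the timings reported further down: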

import time
t_ini = time.time()

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
print(f"import time: {time.time() - t_ini:.3f}s")

model_id = "black-forest-labs/FLUX.1-dev"

# Initialize the CUDA context up front so it is not attributed to the model load.
t0 = time.time()
torch.cuda.synchronize()
print(f"CUDA sync time: {time.time() - t0:.3f}s")

# Time the transformer load, dispatched straight onto the GPU via device_map="cuda".
print("starting model load")
t1 = time.time()
transformer = FluxTransformer2DModel.from_pretrained(model_id, subfolder="transformer", torch_dtype=torch.bfloat16, device_map="cuda")
torch.cuda.synchronize()
t2 = time.time()

diff = t2 - t1
print(f"time: {diff:.3f}s")

# Sanity-check generation with the rest of the pipeline moved to the GPU.
pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.text_encoder.to("cuda")
pipe.text_encoder_2.to("cuda")
pipe.vae.to("cuda")
prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=28, guidance_scale=4.0).images[0]
image.save("flux.png")

Sister PR in accelerate, required to obtain the speedup: huggingface/accelerate#3674

Load time:

  • On main: 16.765s
  • On this branch: 4.521s

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul (Member) left a comment

Thanks a lot for quickly getting this up 🔥

My comments are mostly minor; the major one is adding hf_quantizer to the allocator function.

Additionally, for a potentially better user experience, it would be helpful to rethink the to() method of DiffusionPipeline. I mean the following.

Currently, from what I understand, we have to first initialize the denoiser using device_map and then the rest of the components. If a user calls .to() on a DiffusionPipeline, we could consider using device_map="cuda" to dispatch the model-level components to CUDA (see the sketch below). I don't immediately see a downside to it.
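To make the suggestion concrete, a rough sketch of the two usage patterns; the second is purely hypothetical and not part of this PR:

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

model_id = "black-forest-labs/FLUX.1-dev"

# Current fast path: load the denoiser with device_map="cuda" first, then move
# the remaining components over one by one.
transformer = FluxTransformer2DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16, device_map="cuda"
)
pipe = FluxPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
pipe.text_encoder.to("cuda")
pipe.text_encoder_2.to("cuda")
pipe.vae.to("cuda")

# Suggested ergonomics (hypothetical, not implemented here): let pipe.to("cuda")
# dispatch each model-level component the same way device_map="cuda" does at load time.
# pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
# pipe.to("cuda")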

@@ -520,3 +526,64 @@ def load_gguf_checkpoint(gguf_checkpoint_path, return_tensors=False):
        parsed_parameters[name] = GGUFParameter(weights, quant_type=quant_type) if is_gguf_quant else weights

    return parsed_parameters


def _find_mismatched_keys(

Taken out of here:

def _find_mismatched_keys(

if device_type is None:
    device_type = get_device()
device_mod = getattr(torch, device_type, torch.cuda)
device_mod.synchronize()

I guess all different backends ought to have this method. Just flagging.

@a-r-r-o-w (Member, Author) replied Jul 10, 2025

AFAIK, synchronize should be available on all devices. Only the empty_cache function required a special check, because it would fail if the device was cpu.
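A small sketch of that distinction (the helper name is made up; this is not code from this PR):

import torch

def sync_and_maybe_empty_cache(device_type: str) -> None:
    device_mod = getattr(torch, device_type, torch.cuda)
    # synchronize is available across accelerator backends, so no special casing.
    if hasattr(device_mod, "synchronize"):
        device_mod.synchronize()
    # empty_cache does not exist for the CPU backend, so it needs an explicit guard.
    if device_type != "cpu" and hasattr(device_mod, "empty_cache"):
        device_mod.empty_cache()

sync_and_maybe_empty_cache("cuda" if torch.cuda.is_available() else "cpu")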

@sayakpaul (Member) left a comment

Ship

@SunMarc (Member) left a comment

Thanks for this! Just a nit.

@a-r-r-o-w merged commit c903527 into main on Jul 11, 2025
32 checks passed
@a-r-r-o-w deleted the speedup-model-loading branch on July 11, 2025 at 16:13