HunyuanImage21 #12333
base: main
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
LGTM 👍🏽
Looks quite ready! My comments are mostly minor apart from some suggestions on potentially reducing some code (definitely not merge-blocking).
Let's also add tests and a doc page entry 👀
```python
return h


class AutoencoderKLHunyuanImage(ModelMixin, ConfigMixin, FromOriginalModelMixin):
```
In order for `FromOriginalModelMixin` to work properly, don't we have to add a mapping function in `single_utils.py`? Cc: @DN6
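For context, such a mapping function typically renames keys from the original checkpoint layout to the diffusers module layout. A minimal sketch of the idea, with all key names invented for illustration (not the actual HunyuanImage checkpoint keys):

```python
# Hypothetical sketch of a single-file key-mapping function. The prefixes
# below are invented; a real converter would use the actual checkpoint keys.

def convert_hunyuanimage_vae_checkpoint(checkpoint: dict) -> dict:
    # Map (invented) original key prefixes to diffusers-style prefixes.
    key_map = {
        "enc.first_conv": "encoder.conv_in",
        "dec.last_conv": "decoder.conv_out",
    }
    converted = {}
    for key, value in checkpoint.items():
        for src, dst in key_map.items():
            if key.startswith(src):
                key = dst + key[len(src):]
                break
        converted[key] = value
    return converted

state_dict = {"enc.first_conv.weight": 0, "other.bias": 1}
print(convert_hunyuanimage_vae_checkpoint(state_dict))
# → {'encoder.conv_in.weight': 0, 'other.bias': 1}
```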
Yeah, we can remove it if single-file support for this isn't needed, or add it in a follow-up if it is.
Considering how big the model is, I would imagine GGUF support would be a reason to support single file.
You can do your own GGUFs out of diffusers checkpoints:
https://huggingface.co/docs/diffusers/main/en/quantization/gguf#convert-to-gguf
```python
return hidden_states


class AutoencoderKLHunyuanImageRefiner(ModelMixin, ConfigMixin):
```
I haven't compared too deeply but is there a chance we can fold the other VAE class implementation and this one into a single combined class? Or are the changes too many for that? Regardless, it's definitely not something merge-blocking.
One is 2D and one is 3D (I think it may be fine-tuned from HunyuanVideo, similar to the Qwen-Image & Wan situation).
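If folding were attempted anyway, one option is parameterizing a single class over the conv dimensionality from a config field. A minimal sketch of that idea (illustrative only; whether the two VAEs differ by more than conv dimensionality is exactly the open question here):

```python
import torch.nn as nn

# Hypothetical helper: build a 2D or 3D conv from a `dims` config value,
# so one VAE class could cover both variants. Not the actual diffusers code.
def make_conv(dims: int, in_channels: int, out_channels: int, kernel_size: int) -> nn.Module:
    conv_cls = {2: nn.Conv2d, 3: nn.Conv3d}[dims]
    return conv_cls(in_channels, out_channels, kernel_size, padding=kernel_size // 2)
```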
```python
return hidden_states, encoder_hidden_states


class HunyuanImageTransformer2DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOriginalModelMixin, CacheMixin):
```
If we subclass from `AttentionMixin`, I think utilities like `attn_processors` will become available automatically and we won't have to implement them here. Cc: @DN6
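To show why the mixin removes boilerplate, here is a toy sketch (not the real diffusers code) of the pattern: the mixin walks the module tree once and collects every attention processor it finds, so subclasses get the utility for free.

```python
# Toy illustration of an AttentionMixin-style base class. All names are
# simplified stand-ins, not the actual diffusers implementation.

class FakeProcessor:
    pass

class FakeAttention:
    def __init__(self):
        self.processor = FakeProcessor()

class AttentionMixinSketch:
    @property
    def attn_processors(self):
        # Assumes the host class exposes named_modules(), as nn.Module does.
        return {
            name: module.processor
            for name, module in self.named_modules()
            if hasattr(module, "processor")
        }

class TinyTransformer(AttentionMixinSketch):
    def __init__(self):
        self.attn = FakeAttention()

    def named_modules(self):
        yield "attn", self.attn

model = TinyTransformer()
print(list(model.attn_processors))  # → ['attn']
```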
```python
hidden_size = num_attention_heads * attention_head_dim
mlp_dim = int(hidden_size * mlp_ratio)

self.attn = Attention(
```
Not a merge blocker, but we could consider doing `HunyuanImageAttention` like `class FluxAttention(torch.nn.Module, AttentionModuleMixin):`.
Happy to open a PR myself as a followup.
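For readers unfamiliar with the pattern being suggested: the attention module is a small shell that delegates to a swappable processor object, so backends can be switched without touching the module. A rough, simplified sketch (class and method names are illustrative, not the actual diffusers implementation):

```python
# Simplified sketch of the FluxAttention-style pattern: a dedicated
# attention class owning a swappable processor. Names are hypothetical.

class DefaultProcessor:
    def __call__(self, attn, hidden_states):
        # A real processor would project to q/k/v and run attention here.
        return hidden_states

class AttentionModuleMixinSketch:
    def set_processor(self, processor):
        self.processor = processor

class HunyuanImageAttentionSketch(AttentionModuleMixinSketch):
    def __init__(self):
        self.set_processor(DefaultProcessor())

    def forward(self, hidden_states):
        return self.processor(self, hidden_states)

attn = HunyuanImageAttentionSketch()
print(attn.forward([1.0, 2.0]))  # → [1.0, 2.0]
```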
```python
>>> pipe = HunyuanImagePipeline.from_pretrained(
...     "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16
... )
```
To be updated? 👀
hunyuan21: this branch seems not to have been merged successfully. Is there any action for the next step?
@sayakpaul @yiyixuxu gentle ping, as this PR has been close to merge for the past 3 weeks.
@kk3dmax @vladmandic
…refiner.py Co-authored-by: Sayak Paul <[email protected]>
fix #12321
HuyuanImage2-1
HuyuanImage2.1-Distilled
HunyuanImage-2.1-Refiner