
Confusion about the usage of MMGP #9


Description

@lixiangcog
from mmgp import offload
# model = Inference.load_state_dict(args, model, pretrained_model_path)
offload.load_model_data(model, pretrained_model_path, pinToMemory=pinToMemory, partialPinning=partialPinning)

from mmgp import offload, profile_type
pipe = hunyuan_video_sampler.pipeline
offload.profile(pipe, profile_no=profile_type.HighRAM_LowVRAM_Fast)
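For context, the two entry points quoted above can be wired into a single helper. This is only a hedged sketch: the `model`, `pipe`, checkpoint path, and pinning flags are assumed to come from the surrounding project (as in the snippets), and the only `mmgp` calls used are the two already shown in the issue; it does not by itself answer the multi-GPU question.

```python
# Hedged sketch: combining the two mmgp entry points quoted above.
# Only offload.load_model_data and offload.profile come from the issue;
# everything else (arguments, flags) is a placeholder assumption.
try:
    from mmgp import offload, profile_type
except ImportError:
    # mmgp not installed in this environment; keep the sketch importable.
    offload = profile_type = None

def setup_offload(model, pipe, pretrained_model_path,
                  pin_to_memory=True, partial_pinning=False):
    """Load checkpoint data into `model`, then attach an offload profile to `pipe`.

    Returns False when mmgp is unavailable, True after both calls succeed.
    """
    if offload is None:
        return False
    # Approach 1 from the issue: stream the checkpoint into the model,
    # optionally pinning (part of) it in RAM for faster transfers.
    offload.load_model_data(model, pretrained_model_path,
                            pinToMemory=pin_to_memory,
                            partialPinning=partial_pinning)
    # Approach 2 from the issue: pick a RAM/VRAM trade-off profile
    # for the whole pipeline.
    offload.profile(pipe, profile_no=profile_type.HighRAM_LowVRAM_Fast)
    return True
```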

If I want to leverage MMGP to reduce the load on the CPU and RAM when running with multiple GPUs, which of these approaches should I use?

@deepbeepmeep
