I tried to load FLUX.1-Turbo-Alpha into FluxKontextPipeline with the code below, then save the pipeline locally with save_pretrained.
```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)

adapter_id = "alimama-creative/FLUX.1-Turbo-Alpha"
pipe.load_lora_weights(adapter_id, adapter_name="flux-Turbo-Alpha")
pipe.fuse_lora()

print("Start to save pipeline")
pipe.unfuse_lora()
pipe.save_pretrained("./kontext_turbo", max_shard_size="50GB")
print("Pipeline saved")
```
Then I loaded it again:
```python
import torch
from diffusers import FluxKontextPipeline, FluxPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "./kontext_turbo", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

ref_url = "https://github.com/ZenAI-Vietnam/Flux-Kontext-pipelines/blob/main/assets/dinosaur_input.jpeg?raw=true"
image = load_image(ref_url).convert("RGB")

prompt = "Change yellow dinosaur to red"
image = pipe(
    image=image,
    prompt=prompt,
    guidance_scale=2.5,
    num_inference_steps=10,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("kontext_save_local.png")
```
But it failed with the following error:
```
Traceback (most recent call last):
  File "/workspace/Speedup_Kontext/pipeline_experiment/kontext_checkpoint/local.py", line 10, in <module>
    pipe.to("cuda")
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py", line 541, in to
    module.to(device, dtype)
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/diffusers/models/modeling_utils.py", line 1384, in to
    return super().to(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1355, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 915, in _apply
    module._apply(fn)
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 942, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "/workspace/miniconda3/envs/kontext/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1348, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
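In case it helps narrow this down, here is a small diagnostic sketch (using only standard diffusers/PyTorch attributes) that prints which pipeline components still have parameters on the meta device after loading from the local folder:

```python
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "./kontext_turbo", torch_dtype=torch.bfloat16
)

# List the devices each nn.Module component ends up on; any component
# reporting "meta" was never materialized from the saved checkpoint.
for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        devices = {str(p.device) for p in component.parameters()}
        print(name, devices or {"<no parameters>"})
```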
How can I fix this? Thanks a lot.
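For completeness, a hedged sketch of a loading variant I have not verified: disabling the low-CPU-memory fast init, which is the path that normally initializes weights on the meta device before the checkpoint is read. Whether this actually avoids the error with a locally saved Kontext checkpoint is only my assumption:

```python
import torch
from diffusers import FluxKontextPipeline

# low_cpu_mem_usage=False skips the meta-device fast init and
# materializes all weights directly (at the cost of extra RAM).
pipe = FluxKontextPipeline.from_pretrained(
    "./kontext_turbo",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=False,
)
pipe.to("cuda")
```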