Better Control for FLUX: FLUX.1-dev-ControlNet-Union-Pro-2.0
Shakker Labs FLUX.1-dev-ControlNet-Union-Pro-2.0
1. Model Overview
FLUX.1-dev-ControlNet-Union-Pro-2.0, released by Shakker Labs, is a unified ControlNet model designed for the FLUX.1-dev base model. It improves on the previous version in several ways: the mode embedding has been removed to reduce model size, and both control fidelity and aesthetics are improved for Canny edge and Pose control. The model also adds support for Soft Edge, while dropping support for Tile mode.
2. Model Architecture and Training
The ControlNet consists of 6 double blocks, with no single blocks. It was trained from scratch for 300,000 steps on a dataset of 20 million high-quality general and portrait images. Training parameters:
- Resolution: 512x512
- Dtype: BFloat16
- Batch size: 128
- Learning rate: 2e-5
- Guidance: uniformly sampled from [1, 7]
- Text dropout ratio: 0.20
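The last two training parameters above can be made concrete with a small sketch: guidance is drawn uniformly from [1, 7], and the text prompt is dropped (replaced with an empty string) 20% of the time so the model also learns unconditional behavior. This is an illustrative reconstruction, not the actual training code; the function name and structure are assumptions.

```python
import random

def sample_training_inputs(prompt, guidance_range=(1.0, 7.0), text_drop_rate=0.20):
    """Illustrative sketch (not the real training loop) of two choices
    described above: uniform guidance sampling over [1, 7] and a 20%
    text dropout ratio."""
    guidance = random.uniform(*guidance_range)  # uniform over [1, 7]
    if random.random() < text_drop_rate:        # drop the caption 20% of the time
        prompt = ""                             # unconditional training sample
    return prompt, guidance
```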
3. Supported Control Modes
The model supports multiple control modes:
- Canny: implemented with cv2.Canny; recommended controlnet_conditioning_scale 0.7, control_guidance_end 0.8
- Soft Edge: implemented with AnylineDetector; recommended controlnet_conditioning_scale 0.7, control_guidance_end 0.8
- Depth: implemented with depth-anything; recommended controlnet_conditioning_scale 0.8, control_guidance_end 0.8
- Pose: implemented with DWPose; recommended controlnet_conditioning_scale 0.9, control_guidance_end 0.65
- Gray: implemented with cv2.cvtColor; recommended controlnet_conditioning_scale 0.9, control_guidance_end 0.8
4. Usage Examples
Two usage examples are provided: single-condition inference and multi-condition inference.
Single-Condition Inference Example
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.float16)
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet, torch_dtype=torch.float16)
pipe.to("cuda")

control_image = load_image("./conds/canny.png")
width, height = control_image.size
prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing"

image = pipe(
    prompt,
    control_image=control_image,
    width=width,
    height=height,
    controlnet_conditioning_scale=0.7,
    control_guidance_end=0.8,
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```
Multi-Condition Inference Example
```python
import torch
from diffusers.utils import load_image
from diffusers import FluxControlNetPipeline, FluxControlNetModel

base_model = 'black-forest-labs/FLUX.1-dev'
controlnet_model_union = 'Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro-2.0'

controlnet = FluxControlNetModel.from_pretrained(controlnet_model_union, torch_dtype=torch.float16)
# Pass the controlnet as a list to enable multi-condition inference.
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=[controlnet], torch_dtype=torch.float16)
pipe.to("cuda")

control_image = load_image("./conds/canny.png")
width, height = control_image.size
prompt = "A young girl stands gracefully at the edge of a serene beach, her long, flowing"

image = pipe(
    prompt,
    control_image=[control_image, control_image],
    width=width,
    height=height,
    controlnet_conditioning_scale=[0.35, 0.35],
    control_guidance_end=[0.8, 0.8],
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]
```
5. Related Resources and Acknowledgements
The model was developed by Shakker Labs; the original idea was inspired by xinsir/controlnet-union-sdxl-1.0. All copyrights reserved.
Other related model resources include:
- InstantX/FLUX.1-dev-IP-Adapter
- InstantX/FLUX.1-dev-Controlnet-Canny
- Shakker-Labs/FLUX.1-dev-ControlNet-Depth
- Shakker-Labs/FLUX.1-dev-ControlNet-Union-Pro