FLUX.1 Fill [dev] - AI Image Models Tool
Overview
FLUX.1 Fill [dev] is a 12-billion-parameter rectified-flow transformer from Black Forest Labs focused on text-guided inpainting and outpainting. The checkpoint is released as a "dev" (non-commercial) variant with open weights and reference inference code; it fills or extends specific regions of an existing image based on a text prompt and a binary mask, while preserving lighting, composition, and structural cues from the source image. ([huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev?utm_source=openai))
The model is guidance-distilled for faster, more efficient sampling than many large generative models and integrates directly with the Hugging Face Diffusers ecosystem via a FluxFillPipeline implementation (examples show torch bfloat16 usage and a max_sequence_length parameter). Black Forest Labs also publish a developer benchmark claiming that FLUX.1 Fill [pro] leads in inpainting quality and that FLUX.1 Fill [dev] ranks second, while the wider toolset supports structural conditioning (Canny/Depth) and outpainting workflows. Community feedback shows strong praise for edit quality and prompt adherence, alongside recurring failure cases (mask-edge artifacts, continuity during large outpaints) discussed in issues and forums. ([huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev?utm_source=openai))
Model Statistics
- Downloads: 122,907
- Likes: 980
- License: other
Model Details
Architecture and size: FLUX.1 Fill [dev] is described by Black Forest Labs as a 12B-parameter rectified-flow transformer operating in a latent flow-matching paradigm. The approach combines flow-matching (rectified flow) training with transformer blocks to map noise-to-image trajectories efficiently. ([huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev?utm_source=openai))
Training & efficiency: The dev checkpoint is guidance-distilled to reduce sampling cost and latency compared with the full pro variants, enabling fewer sampling steps and more practical local experimentation. Structural-conditioning adapters (FLUX.1 Canny and FLUX.1 Depth) let users preserve edges or depth structure when editing, and the model supports both inpainting (masked-area replacement) and outpainting (image extension). ([bfl.ai](https://bfl.ai/flux-1-tools/?utm_source=openai))
Integration & runtime: Black Forest Labs provide a Diffusers FluxFillPipeline (the reference example uses torch bfloat16 and max_sequence_length=512). The weights and a reference implementation are available on Hugging Face, and the models are also accessible through the BFL API and several partner inference providers; community repositories and ComfyUI/Replicate wrappers exist for local or hosted use. Typical usage parameters shown in examples include guidance_scale, num_inference_steps, a generator seed, and mask_image inputs. ([huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev?utm_source=openai))
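The Canny and Depth adapters mentioned above are separate checkpoints with their own pipeline class in Diffusers. The sketch below is a minimal, illustrative use of FluxControlPipeline with the FLUX.1-Canny-dev weights; the source file name, edge thresholds, and sampling settings are assumptions rather than values from the model card, and the Canny checkpoint requires accepting its own license.
Example (python):
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import FluxControlPipeline
# Build an RGB Canny edge map from a local source image (thresholds are illustrative).
source = Image.open("source.png").convert("RGB")
edges = cv2.Canny(np.array(source.convert("L")), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
# FLUX.1-Canny-dev is a separate download from FLUX.1-Fill-dev.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
result = pipe(
    prompt="a chrome robot in the same pose",
    control_image=control_image,
    height=1024,
    width=1024,
    guidance_scale=30.0,
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
result.save("flux-canny-dev.png")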
Key Features
- Text-guided inpainting: replace masked areas using natural-language prompts.
- Outpainting: extend images beyond their borders while matching scene lighting and composition (a mask-construction sketch follows this list).
- 12B rectified-flow transformer: large latent flow-matching architecture for high-fidelity edits.
- Guidance distillation: optimized for fewer sampling steps and faster inference.
- Diffusers FluxFillPipeline: direct integration with Hugging Face diffusers for local inference.
- Structural conditioning: supports Canny and Depth maps to preserve image structure.
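Outpainting with FluxFillPipeline is driven entirely by how the inputs are prepared: the source image is pasted onto a larger canvas and the mask marks the new border region for generation. The helper below is a minimal sketch, assuming the white-means-fill mask convention used in the usage example further down; the padding size and file names are illustrative.
Example (python):
from PIL import Image

def make_outpaint_inputs(src, pad=256):
    # Pad the source on all sides; the padded band is what the model will generate.
    w, h = src.size
    canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "black")
    canvas.paste(src, (pad, pad))
    # Mask: 255 (white) = regenerate, 0 (black) = keep the original pixels.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(Image.new("L", (w, h), 0), (pad, pad))
    return canvas, mask

source = Image.open("photo.png").convert("RGB")
image, mask_image = make_outpaint_inputs(source, pad=256)
# Pass `image` and `mask_image` to FluxFillPipeline as in the Example Usage section.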
Example Usage
Example (python):
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image
# load images (example uses HF-hosted example images)
image = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup.png")
mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")
# instantiate pipeline (example recommended dtype: bfloat16)
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")
# run text-guided inpainting: white areas of the mask are regenerated from the prompt
out = pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    height=1632,
    width=1232,
    guidance_scale=30,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
out.save("flux-fill-dev.png")
# Note: accept the FLUX.1 [dev] Non-Commercial License on the model page before downloading and using the weights.
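The full bfloat16 pipeline (a 12B transformer plus text encoders) may not fit on smaller GPUs. Below is a minimal sketch of Diffusers' standard CPU offloading, which requires the accelerate package and replaces the .to("cuda") call above; exact VRAM requirements are not stated on the model card and will vary by setup.
Example (python):
import torch
from diffusers import FluxFillPipeline

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",
    torch_dtype=torch.bfloat16,
)
# Stream submodules to the GPU on demand instead of keeping the whole pipeline resident;
# do not also call .to("cuda") when offloading is enabled.
pipe.enable_model_cpu_offload()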
Benchmarks
- Parameters: 12 billion (Source: https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev, model page)
- Developer benchmark (inpainting rank): in the BFL benchmark, Fill [pro] ranked #1 and Fill [dev] ranked #2 among evaluated inpainting models (Source: https://bfl.ai/flux-1-tools/, Black Forest Labs benchmark statement)
- Diffusers pipeline support: FluxFillPipeline usage example with torch bfloat16 and mask_image inputs (Source: https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev, usage example)
- Hugging Face downloads: monthly download activity shown on the HF model page (Source: https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev, model stats and activity)
- Common failure modes (community): mask-edge lines and large-area continuity / outpaint coherence issues reported in community threads; a mask-feathering mitigation sketch follows this list (Source: Hugging Face discussions, GitHub issues, Reddit posts)
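One commonly suggested mitigation for the mask-edge lines noted above is to feather the hard binary mask before passing it to the pipeline; this is general inpainting practice rather than anything specific to FLUX.1 Fill [dev]. A minimal sketch, with an illustrative blur radius and file name:
Example (python):
from PIL import Image, ImageFilter

def feather_mask(mask, radius=8):
    # Soften the hard 0/255 boundary so the generated fill blends into the surrounding pixels.
    return mask.convert("L").filter(ImageFilter.GaussianBlur(radius))

mask = Image.open("cup_mask.png")
soft_mask = feather_mask(mask, radius=8)  # pass as mask_image= to FluxFillPipeline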
Key Information
- Category: Image Models
- Type: AI Image Models Tool