FLUX1.1 [pro] - AI Image Models Tool
Overview
FLUX1.1 [pro] is a text-to-image generative model announced on the Replicate blog that prioritizes speed, quality, and prompt fidelity. The release positions FLUX1.1 [pro] as a successor to the original FLUX model, with substantially faster inference (the announcement reports up to a 6× speed improvement), improved image quality, better adherence to user prompts, and more diverse outputs across multiple runs. A central capability is a built-in "prompt upsampling" feature that uses a language model to expand short or under-specified prompts into richer, more actionable instructions for the image generator, reducing manual prompt engineering and speeding iteration. The model is aimed at creators and teams who need rapid visual ideation, batch generation for concept exploration, and closer consistency between text intent and resulting imagery. According to the Replicate post, FLUX1.1 [pro] is especially useful when users want multiple diverse samples quickly or when concise prompts need automated enrichment to produce more detailed renderings via the prompt upsampling pipeline (source: https://replicate.com/blog/flux-1-1-pro-is-here). The announcement highlights these performance and feature improvements but does not include pricing, exact model size, parameter counts, or detailed training-data information.
Key Features
- Up to 6× faster image generation versus the prior FLUX release
- Prompt upsampling: LLM-driven prompt expansion to reduce manual prompt engineering
- Improved prompt adherence for closer alignment with textual instructions
- Greater output diversity across seeds and multiple sampling runs (see the seed-variation sketch after this list)
- Designed for rapid iteration and batch concept exploration
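The diversity and rapid-iteration points above lend themselves to seed-varied batch requests. The sketch below illustrates one way to do that against a Replicate-style predictions endpoint; the endpoint, model identifier, and "seed" input name are assumptions (mirroring the placeholder example further down), not details published in the announcement.

import requests

REPLICATE_API_URL = "https://api.replicate.com/v1/predictions"  # example Replicate-style endpoint
REPLICATE_API_TOKEN = "<your-replicate-token>"
HEADERS = {
    "Authorization": f"Token {REPLICATE_API_TOKEN}",
    "Content-Type": "application/json"
}

def generate_variations(prompt: str, seeds: list[int]) -> list[dict]:
    """Submit one prediction per seed and return the raw prediction objects."""
    predictions = []
    for seed in seeds:
        payload = {
            "version": "flux/flux-1-1-pro",  # placeholder model identifier
            "input": {"prompt": prompt, "seed": seed}  # "seed" key is an assumption; check the model's input schema
        }
        resp = requests.post(REPLICATE_API_URL, json=payload, headers=HEADERS)
        resp.raise_for_status()
        predictions.append(resp.json())
    return predictions

# Example: four seed-varied takes on the same concept
# runs = generate_variations("isometric fantasy city at sunset", seeds=[1, 2, 3, 4])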
Example Usage
Example (python):
import requests

# Example two-step pipeline: 1) expand prompt with an LLM, 2) send expanded prompt to FLUX1.1 [pro]
# NOTE: Replace placeholders (API keys, endpoints, and model identifiers) with your provider's values.

OPENAI_API_KEY = "<your-openai-key>"  # or any LLM provider you use for prompt upsampling
REPLICATE_API_URL = "https://api.replicate.com/v1/predictions"  # example Replicate-style endpoint
REPLICATE_API_TOKEN = "<your-replicate-token>"
FLUX_MODEL = "flux/flux-1-1-pro"  # replace with the exact model identifier if different

def expand_prompt_with_llm(short_prompt: str) -> str:
    """Use an LLM to expand/refine a short prompt into a detailed prompt. Replace with real LLM call."""
    # This is a placeholder; insert your LLM provider call (OpenAI/HuggingFace/etc.).
    refined = f"Detailed, vivid description for: {short_prompt}. Include lighting, materials, camera angle, and mood."
    return refined
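# Illustrative alternative (an assumption, not from the announcement): the placeholder above
# could be swapped for a real LLM call to perform the prompt upsampling step. The OpenAI
# client and the model name below are example choices; use whichever provider you actually run.
from openai import OpenAI

def expand_prompt_with_openai(short_prompt: str) -> str:
    """Hypothetical prompt-upsampling helper using the OpenAI chat completions API."""
    client = OpenAI(api_key=OPENAI_API_KEY)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice, for illustration only
        messages=[
            {"role": "system", "content": "Expand short image prompts into detailed, vivid "
                                          "descriptions covering lighting, materials, "
                                          "camera angle, and mood."},
            {"role": "user", "content": short_prompt},
        ],
    )
    return response.choices[0].message.content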
def generate_image_with_flux(prompt: str, width=768, height=768, num_outputs=4):
    payload = {
        "version": FLUX_MODEL,
        "input": {
            "prompt": prompt,
            "width": width,
            "height": height,
            "num_outputs": num_outputs
        }
    }
    headers = {
        "Authorization": f"Token {REPLICATE_API_TOKEN}",
        "Content-Type": "application/json"
    }
    resp = requests.post(REPLICATE_API_URL, json=payload, headers=headers)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    short_prompt = "fantasy city at sunset"
    expanded_prompt = expand_prompt_with_llm(short_prompt)
    print("Expanded prompt:\n", expanded_prompt)
    result = generate_image_with_flux(expanded_prompt, width=1024, height=1024, num_outputs=6)
    print("Generation response:\n", result)

# Note: adapt payload keys to match the FLUX1.1 [pro] inference API if different.
Benchmarks
- Generation speed: up to 6× faster than the predecessor (source: https://replicate.com/blog/flux-1-1-pro-is-here)
Key Information
- Category: Image Models
- Type: AI Image Models Tool