ostris/flux-dev-lora-trainer - AI Training Tool

Overview

ostris/flux-dev-lora-trainer is a Replicate-hosted training workflow for LoRA (Low-Rank Adaptation) fine-tuning of the FLUX.1-dev base model, built on the ai-toolkit. Training jobs run in the cloud on NVIDIA H100 GPUs and return custom-trained LoRA weights and artifacts, so there is no need to provision or manage GPU infrastructure locally.

According to the Replicate model page, the tool targets users who want to adapt FLUX.1-dev to domain-specific data while keeping compute, storage, and iteration times lower than full-model fine-tuning. The trainer integrates with Replicate's automated job orchestration and the ai-toolkit training utilities, supporting an end-to-end workflow: upload a dataset, configure a LoRA training job, run it on H100 hardware, and download the produced weights.

Because it uses the LoRA approach, the trainer focuses on parameter-efficient adaptation: small weight deltas stored separately from the original model, which simplifies deployment and sharing of custom adaptations. For exact input fields, training options, and job lifecycle behavior, consult the Replicate model page at the project URL.
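The parameter-efficient idea can be sketched with plain NumPy: instead of updating a full weight matrix W, LoRA trains two small matrices B and A whose product forms a low-rank delta added on top of W. The dimensions, scaling, and zero initialization below are illustrative assumptions, not the trainer's actual internals:

```python
import numpy as np

# Hypothetical dimensions for one projection layer; FLUX's real shapes differ.
d_out, d_in, rank = 3072, 3072, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable LoRA "down" matrix
B = np.zeros((d_out, rank))                   # trainable LoRA "up" matrix (zero-init)

# The adapted weight is the base plus a low-rank delta scaled by alpha/rank.
alpha = 16
W_adapted = W + (alpha / rank) * (B @ A)

# Only A and B need to be stored and shared, not the full base model:
full_params = W.size
lora_params = A.size + B.size
print(f'full: {full_params:,}  lora: {lora_params:,}  '
      f'ratio: {full_params / lora_params:.0f}x')
```

With B zero-initialized, the adapted weight starts identical to the base model, and training only ever touches the small A and B matrices.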

Key Features

  • LoRA-based fine-tuning of FLUX.1-dev for parameter-efficient domain adaptation
  • Hosted on Replicate with automated cloud orchestration and job lifecycle management
  • Runs training jobs on NVIDIA H100 GPUs for high-throughput training
  • Integrates with ai-toolkit training utilities for dataset and checkpoint management
  • Produces downloadable custom-trained LoRA weights and artifacts for deployment
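LoRA image trainers commonly accept a dataset bundled as a zip of images with matching .txt caption files; the exact format this trainer expects is documented on the model page, so the layout below is a hypothetical sketch of that common convention:

```python
import os
import tempfile
import zipfile

def build_training_zip(image_caption_pairs, zip_path):
    """Bundle (image_path, caption) pairs into a zip where each image is
    accompanied by a same-named .txt caption file."""
    with zipfile.ZipFile(zip_path, 'w') as zf:
        for image_path, caption in image_caption_pairs:
            name = os.path.basename(image_path)
            zf.write(image_path, arcname=name)
            stem, _ = os.path.splitext(name)
            zf.writestr(f'{stem}.txt', caption)
    return zip_path

# Demo with a placeholder file (stand-in bytes, not a real JPEG):
tmp = tempfile.mkdtemp()
img = os.path.join(tmp, 'photo1.jpg')
with open(img, 'wb') as f:
    f.write(b'\xff\xd8\xff')
out = build_training_zip([(img, 'a photo of TOK, studio lighting')],
                         os.path.join(tmp, 'dataset.zip'))
print(sorted(zipfile.ZipFile(out).namelist()))
```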

Example Usage

Example (python):

import os
import time
import requests

# Illustrative example only; verify exact fields and endpoints on the model page.
API_TOKEN = os.getenv('REPLICATE_API_TOKEN')
# Trainings are created through the Replicate API rather than the web page URL;
# replace {version} with the trainer version ID shown on the model page.
TRAIN_URL = ('https://api.replicate.com/v1/models/'
             'ostris/flux-dev-lora-trainer/versions/{version}/trainings')

headers = {
    'Authorization': f'Bearer {API_TOKEN}',
    'Content-Type': 'application/json',
}

# Replace the payload keys with the actual input schema from the Replicate trainer page.
payload = {
    # Replicate training requests take a destination model (where the trained
    # weights are pushed) alongside the trainer-specific 'input' dict.
    'destination': 'your-username/your-model-name',
    'input': {
        'training_data_path': 's3://your-bucket/your-dataset/',
        'validation_data_path': 's3://your-bucket/val-dataset/',
        'output_bucket': 's3://your-bucket/outputs/',
        # add other trainer-specific options (batch size, lr, LoRA rank, etc.)
    }
}

resp = requests.post(TRAIN_URL, headers=headers, json=payload)
resp.raise_for_status()
job = resp.json()
print('Started job:', job.get('id') or job)

# Polling example (Replicate responses expose a polling URL under urls.get)
job_status_url = (job.get('urls') or {}).get('get')
while job_status_url:
    status_resp = requests.get(job_status_url, headers=headers)
    status_resp.raise_for_status()
    status = status_resp.json()
    print('Status:', status.get('status'))
    if status.get('status') in ('succeeded', 'failed', 'canceled'):
        print('Final status:', status)
        break
    time.sleep(30)

# After success, download output artifacts using provided URLs in the job response.
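The artifact-download step can be sketched as a pair of helpers. The shape of the job's `output` field (a single URL, a list, or a dict of named files) is an assumption to be verified against Replicate's API reference, so the extractor normalizes all three:

```python
def extract_output_urls(job):
    """Collect artifact URLs from a finished job's 'output' field.
    The possible shapes handled here are assumptions, not a documented
    contract; check the actual response against Replicate's API docs."""
    output = job.get('output')
    if output is None:
        return []
    if isinstance(output, str):
        return [output]
    if isinstance(output, dict):
        return [v for v in output.values() if isinstance(v, str)]
    return [u for u in output if isinstance(u, str)]

def download_artifacts(job, dest_dir='.'):
    """Stream each artifact to disk; requires the 'requests' package."""
    import os
    import requests
    paths = []
    for url in extract_output_urls(job):
        path = os.path.join(dest_dir, os.path.basename(url.split('?')[0]))
        with requests.get(url, stream=True) as r:
            r.raise_for_status()
            with open(path, 'wb') as f:
                for chunk in r.iter_content(8192):
                    f.write(chunk)
        paths.append(path)
    return paths

# Example with a stand-in job dict (no network access needed):
print(extract_output_urls({'output': {'weights': 'https://example.com/lora.safetensors'}}))
```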

Last Refreshed: 2026-01-09

Key Information

  • Category: Training Tools
  • Type: AI Training Tool