LHM - AI Image Tool
Overview
LHM (Large Animatable Human Reconstruction Model) is an open-source PyTorch implementation that reconstructs an animatable 3D human from a single input image in seconds. The project targets fast, feed‑forward reconstruction and produces animatable outputs suitable for downstream animation pipelines, game prototyping, VR, and VFX workflows. The LHM model family (LHM‑MINI, LHM‑500M, LHM‑1B) was trained on large video collections plus synthetic data and trades off speed against fidelity: the smallest model runs in ~1.4s while the largest provides higher-detail outputs at longer inference times. ([github.com](https://github.com/aigc3d/LHM))

The repository includes GPU‑optimized pipelines, a Docker image, Gradio demo apps (including a motion‑aware demo), ComfyUI nodes for animation extraction and inference, and scripts to download pretrained weights from ModelScope or HuggingFace. The authors provide memory‑saving variants that run on 14–24 GB GPUs and offer detailed installation instructions for Linux and Windows, plus example motion data and evaluation scripts (PSNR, SSIM, LPIPS, face similarity). LHM was accepted to ICCV 2025, and the project actively publishes model cards and pretrained checkpoints for easy integration. ([github.com](https://github.com/aigc3d/LHM))
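The repository's evaluation scripts report PSNR, SSIM, and LPIPS; as a rough illustration of what those numbers measure, the sketch below computes the same three metrics on a single frame pair using off-the-shelf packages (`scikit-image`, `lpips`, `torch`). This is not LHM's own evaluation code, and the file names are placeholders.

```python
# Minimal sketch of PSNR / SSIM / LPIPS evaluation on one frame pair.
# File names are hypothetical placeholders; LHM ships its own scripts.
import torch
import lpips
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = imread("pred_frame.png")  # rendered output, HxWx3 uint8
gt = imread("gt_frame.png")      # ground-truth frame, same shape

psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)

# LPIPS expects NCHW float tensors scaled to [-1, 1].
def to_tensor(img):
    return torch.from_numpy(img).permute(2, 0, 1)[None].float() / 127.5 - 1.0

lpips_fn = lpips.LPIPS(net="alex")
lpips_val = lpips_fn(to_tensor(pred), to_tensor(gt)).item()

print(f"PSNR {psnr:.2f} dB, SSIM {ssim:.4f}, LPIPS {lpips_val:.4f}")
```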
GitHub Statistics
- Stars: 2,520
- Forks: 199
- Contributors: 2
- License: Apache-2.0
- Primary Language: Python
- Last Updated: 2025-07-15T05:57:26Z
According to the project repository, LHM is released under the Apache‑2.0 license and shows active development through mid‑2025, including a June 26, 2025 note about the ICCV acceptance and multiple April 2025 feature updates (memory‑saving builds, LHM‑MINI, ComfyUI nodes). ([github.com](https://github.com/aigc3d/LHM)) Repository metrics from project metadata show 2,520 stars, 199 forks, and 2 listed contributors. The combination of a relatively high star count and a small core contributor count suggests strong interest sustained by a compact maintainer team; community activity includes a feat/comfyui branch, tutorial videos, and open requests for help with Windows installation and ComfyUI documentation. The last commit (timestamped 2025‑07‑15) indicates ongoing maintenance. ([github.com](https://github.com/aigc3d/LHM))
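Snapshot metrics like these go stale quickly; for current numbers, a small query against GitHub's public REST API (the documented `repos` endpoint, which needs no authentication for light read-only use) returns them directly:

```python
# Fetch live repository metrics from the public GitHub REST API.
import requests

resp = requests.get("https://api.github.com/repos/aigc3d/LHM", timeout=10)
resp.raise_for_status()
repo = resp.json()

print("stars:", repo["stargazers_count"])
print("forks:", repo["forks_count"])
print("last push:", repo["pushed_at"])
print("license:", repo["license"]["spdx_id"])
```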
Installation
Install via Docker (Linux) or a local virtual environment:
```bash
git clone git@github.com:aigc3d/LHM.git
cd LHM

# Docker (Linux) - requires nvidia-docker and the CUDA 12.1 image
wget -P ./lhm_cuda_dockers https://virutalbuy-public.oss-cn-hangzhou.aliyuncs.com/share/aigc3d/data/for_lingteng/LHM/LHM_Docker/lhm_cuda121.tar
sudo docker load -i ./lhm_cuda_dockers/lhm_cuda121.tar
sudo docker run -p 7860:7860 -v /host/path:/DOCKER_WORKSPACES -it lhm:cuda_121 /bin/bash

# Alternative local install (tested): create a venv and run the installer scripts
# (Windows example shown; run install_cu118.sh / install_cu121.sh on Linux as documented)
python -m venv lhm_env
lhm_env\Scripts\activate
install_cu121.bat
python ./app.py  # starts the local Gradio app (models download automatically if needed)

# Download pretrained models via HuggingFace
python -c "from huggingface_hub import snapshot_download; snapshot_download(repo_id='3DAIGC/LHM-MINI', cache_dir='./pretrained_models/huggingface')"
```

Key Features
- Single-image to animatable 3D human reconstruction in seconds (LHM‑MINI ~1.41s, LHM‑500M ~2.01s, LHM‑1B ~6.57s).
- Pretrained model family (LHM‑MINI, LHM‑500M, LHM‑1B) trained on 300K videos + 5K synthetic samples.
- ComfyUI nodes for motion extraction and animation inference; with offline motion data, a 10s clip can be generated in ~20s.
- Docker and Windows installers plus memory‑saving variants to run on 14–24 GB GPUs.
- Gradio demo apps supporting user motion sequences and video preprocessing pipelines (see the client sketch after this list).
- Pretrained weights available via HuggingFace and ModelScope for easy programmatic download.
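As a rough illustration of scripting the Gradio demo mentioned above, the sketch below uses the `gradio_client` package against a locally running app. The endpoint name and argument list are assumptions, not LHM's documented API; call `view_api()` first to see the real signatures.

```python
# Hedged sketch: drive the local Gradio demo programmatically once
# `python ./app.py` is serving on port 7860.
from gradio_client import Client, handle_file

client = Client("http://127.0.0.1:7860/")
client.view_api()  # prints the demo's actual endpoints and parameters

result = client.predict(
    handle_file("person.jpg"),  # hypothetical input image path
    api_name="/predict",        # assumed endpoint name -- verify via view_api()
)
print(result)
```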
Community
The project has attracted substantial attention (a high star count) while being maintained by a small core team; it accepts contributions and provides community assets such as ComfyUI nodes, tutorial videos, and example motion data. The repo includes open issues and a feat/comfyui feature branch, and the maintainers ask for community help with Windows ComfyUI documentation and onboarding. Pretrained models are published on HuggingFace and ModelScope, and the README links to step‑by‑step install videos and example data to lower the adoption barrier. ([github.com](https://github.com/aigc3d/LHM))
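For the ModelScope route, a download sketch analogous to the HuggingFace call in the install block might look like the following; the model id is a hypothetical placeholder, so take the real one from the published model cards.

```python
# Hedged sketch: download LHM weights from ModelScope, mirroring the
# HuggingFace snapshot_download call in the install block above.
from modelscope.hub.snapshot_download import snapshot_download

model_dir = snapshot_download(
    "3DAIGC/LHM-MINI",                           # assumed ModelScope id
    cache_dir="./pretrained_models/modelscope",  # parallels the HF cache dir
)
print("weights downloaded to:", model_dir)
```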
Key Information
- Category: Image Tools
- Type: AI Image Tool