UNfilteredAI-1B - AI Language Models Tool

Overview

UNfilteredAI-1B is a 1‑billion‑parameter causal text‑generation model published by UnfilteredAI on the Hugging Face Hub. It is explicitly designed for "unfiltered" or uncensored generation, prioritizing creative, unconstrained output over built‑in refusal behavior or content moderation. The model is distributed as safetensors in float16 and packaged for local or self‑hosted inference, enabling developers and researchers to experiment with boundary‑pushing generation workflows. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B))

Because the project emphasizes unrestricted outputs, the model card calls out ethical risks (possible harmful, biased, or explicit content) and recommends the model only for experienced users who add their own safety layers or monitoring. The Hugging Face repository contains the model files, tokenizer assets, and commit history; the most recent README update was committed on April 16, 2024. Downloads and community interest are modest (hundreds of downloads reported on the model page). These traits make UNfilteredAI-1B suitable for alignment research, creative‑writing prototypes, and local experimentation where end users control moderation. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B))
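Since the model ships without refusal behavior, any deployment needs its own moderation step between generation and the user. A minimal sketch of such a post-generation filter follows; the blocklist, function name, and refusal string are illustrative assumptions, not part of the model or any official tooling, and a real deployment would use a proper moderation model or API rather than keyword matching:

```python
# Illustrative blocklist for demonstration only; not exhaustive
# and not affiliated with the model or the Hugging Face Hub.
BLOCKED_TERMS = {"bomb recipe", "credit card dump"}

def moderate(text: str, blocked=BLOCKED_TERMS) -> str:
    """Return the text unchanged, or a refusal placeholder if any
    blocked term appears (case-insensitive substring match)."""
    lowered = text.lower()
    for term in blocked:
        if term in lowered:
            return "[output withheld by safety filter]"
    return text

# Usage: wrap every generated completion before it reaches the user.
print(moderate("A dragon circles the ruined tower."))
```

The same pattern extends naturally to prompt-side screening or to routing flagged outputs to human review instead of discarding them.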

Model Statistics

  • Downloads: 263
  • Likes: 30
  • Pipeline: text-generation
  • Parameters: 1.1B

License: other

Model Details

Architecture and format: UNfilteredAI-1B is a LLaMA‑style causal LM (Hugging Face architecture name LlamaForCausalLM) provided in safetensors shards and configured to run in float16. The repository config lists a 21‑layer model with hidden_size 2048, 32 attention heads, intermediate_size 5636, a 32k vocabulary, and max position embeddings of 2048. The model card and file tree place it in the ~1B‑parameter class, with two safetensors shards used for distribution. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/blob/main/config.json))

Performance and deployment: The model is not listed as deployed by any commercial inference provider on the Hub, so users generally self‑host it or run it in community Spaces. Public benchmark aggregates from third‑party indexers report a relatively low LLME score compared with larger or instruction‑tuned models; this is useful context for setting expectations but not a substitute for task‑specific evaluation. Key technical artifacts in the repo include the model.safetensors shards, tokenizer.model, tokenizer.json, and config.json, making the model compatible with standard Hugging Face Transformers workflows. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B))
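As a sanity check, the config values above can be turned into a rough parameter estimate. The sketch below assumes full multi‑head attention, an untied output head, and a LLaMA‑style gated MLP, and it ignores norm and bias terms; it therefore only brackets the ~1B class rather than reproducing an exact count (grouped‑query attention or weight tying, if used, would shave the total down):

```python
# Values as reported from the repository's config.json.
hidden = 2048
layers = 21
intermediate = 5636
vocab = 32000

embed = vocab * hidden                      # input embedding matrix
attn_per_layer = 4 * hidden * hidden        # q, k, v, o projections (assumes MHA)
mlp_per_layer = 3 * hidden * intermediate   # gate, up, down (LLaMA-style MLP)
lm_head = vocab * hidden                    # assumes untied output head

total = embed + lm_head + layers * (attn_per_layer + mlp_per_layer)
print(f"~{total / 1e9:.2f}B parameters")
```

The estimate lands a little above 1B, consistent with the ~1B class the model card advertises.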

Key Features

  • Unfiltered generation: no built‑in refusal or content filters on outputs.
  • LLaMA‑style causal LM: LlamaForCausalLM architecture (21 layers, ~1B class).
  • Distributed as safetensors in float16 for efficient local inference.
  • Includes tokenizer files (tokenizer.model, tokenizer.json) for standard pipelines.
  • Targeted at research, creative writing, and alignment/safety experiments.

Example Usage

Example (Python):

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# WARNING: UNfilteredAI-1B can produce uncensored and potentially harmful content.
# Use only in controlled settings with safety/monitoring added.

repo_id = "UnfilteredAI/UNfilteredAI-1B"

tokenizer = AutoTokenizer.from_pretrained(repo_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=False
)

prompt = "Write a short, imaginative opening paragraph for a dark fantasy story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate (adjust max_new_tokens, temperature, etc. to taste)
out = model.generate(**inputs, max_new_tokens=150, temperature=0.9, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# See the model hub for repo details and license information before use.
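The temperature and do_sample arguments above control how the next token is drawn from the model's output distribution. The self-contained sketch below shows temperature scaling plus nucleus (top-p) filtering over a toy logit vector; the logits are made up for illustration, and Transformers applies equivalent transforms internally via its logits-processor machinery rather than this exact code:

```python
import math
import random

def sample_next(logits, temperature=0.9, top_p=0.95, rng=random.Random(0)):
    """Sample one token index from raw logits using temperature + top-p."""
    # 1. Temperature: divide logits before softmax (t < 1 sharpens, t > 1 flattens).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    z = sum(exps)
    probs = [e / z for e in exps]
    # 2. Nucleus filter: keep the smallest high-probability set with mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # 3. Renormalize over the kept tokens and draw one.
    total = sum(probs[i] for i in kept)
    r, acc = rng.random() * total, 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

print(sample_next([2.0, 1.0, 0.1, -1.0]))
```

Lowering temperature or top_p makes output more conservative; raising them increases diversity, which is usually what creative-writing use cases like the prompt above want.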

Benchmarks

Parameter count: 1B parameters (model card) (Source: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B)

Downloads (last month): 263 (as shown on the model page) (Source: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B)

Tensor type / dtype: safetensors, float16 (F16) (Source: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B)

Architecture (config): LlamaForCausalLM — 21 layers, hidden_size 2048, 32 attention heads, vocab_size 32000 (Source: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/blob/main/config.json)

Aggregated LLME score (third party): LLME: ~0.1638 (third‑party indexer) (Source: https://llm-explorer.com/model/UnfilteredAI%2FUNfilteredAI-1B%2C4FVJhIivHn8DuABujI35Xh)

Last Refreshed: 2026-02-24

Key Information

  • Category: Language Models
  • Type: AI Language Models Tool