UNfilteredAI-1B - AI Language Models Tool
Overview
UNfilteredAI-1B is a 1-billion-parameter causal text‑generation model released on Hugging Face by UnfilteredAI, positioned for creative and unconstrained text generation. The model card describes UNfilteredAI-1B as intentionally "uncensored and unrestricted", aiming to enable exploration of controversial or sensitive topics for research and creative use while flagging clear ethical risks of misuse. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B))

In practice, the Hugging Face repository supplies the weights and tokenizer files (the model is distributed as two safetensors shards plus tokenizer assets) and lists the model as a Llama-style causal LM (LlamaForCausalLM). The model is commonly run locally, and the community has published multiple quantized/converted variants (GGUF / llama.cpp) for efficient CPU/GPU inference. Users are explicitly advised to be experienced and responsible due to possible harmful, biased, or explicit outputs. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/tree/main))
Model Statistics
- Downloads: 309
- Likes: 28
- Pipeline: text-generation
- Parameters: 1.1B
- License: other
Model Details
Model architecture and weights: UNfilteredAI-1B is provided as a Llama-style causal language model (transformers architecture name: LlamaForCausalLM) with model_type set to "llama" in the repository config. The published config shows 21 transformer layers, hidden_size 2048, 32 attention heads, intermediate_size 5636, max position embeddings 2048, and a vocab size of 32,000. The checkpoint is stored in FP16/safetensors format. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/blob/main/config.json))

Model files and distribution: The Hugging Face repo contains two safetensors shard files (total ~2.11 GB) together with tokenizer files and a config.json. The model card reports "1B params" and lists the tensor type as F16. There is no official inference provider deployment listed on the model page; instead, the community has published quantized conversions (GGUF) and llama.cpp-compatible builds that let users run the model locally in lower-bit formats for CPU/GPU inference. Examples of community conversions and usage instructions are available (GGUF conversions and llama.cpp CLI/server examples). ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B/tree/main))

License and intended use: The repository lists the license as "other"; consumers should check the model card and repo before commercial or regulated use. The model card and README emphasize limitations (bias, potential for explicit/harmful outputs) and recommend responsible use by experienced developers. ([huggingface.co](https://huggingface.co/UnfilteredAI/UNfilteredAI-1B))
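The config values above can be sanity-checked against the advertised "1B params" with a back-of-the-envelope count. This is a rough sketch assuming standard multi-head attention, a SwiGLU-style MLP, and untied input/output embeddings; if the checkpoint uses grouped-query attention or ties its embeddings, the true count is somewhat lower, so treat the result as a ballpark only.

```python
# Rough parameter-count estimate from the published config values
# (21 layers, hidden_size 2048, intermediate_size 5636, vocab 32000).
# Assumes standard multi-head attention and untied embeddings;
# layer norms and biases are negligible and omitted.

hidden = 2048
layers = 21
intermediate = 5636
vocab = 32_000

embed = vocab * hidden                      # input token embeddings
lm_head = vocab * hidden                    # output projection (if untied)
attn_per_layer = 4 * hidden * hidden        # q, k, v, o projections
mlp_per_layer = 3 * hidden * intermediate   # gate, up, down projections

total = embed + lm_head + layers * (attn_per_layer + mlp_per_layer)
print(f"~{total / 1e9:.2f}B parameters")    # on the order of 1B
```

The estimate lands in the 1.1–1.2B range, consistent with the repo's "1B params" / "1.1B" figures.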
Key Features
- Uncensored text generation for open-ended creative exploration.
- Llama‑style causal architecture (LlamaForCausalLM) with 21 layers and 1B parameters.
- Distributed as FP16 safetensors (two shards) with accompanying tokenizer files.
- Community quantizations and GGUF/llama.cpp conversions for efficient local inference.
- Model card highlights explicit limitations and strong ethical/misuse warnings.
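The FP16 distribution and the community quantizations trade disk/memory footprint for precision. A quick illustrative calculation (approximations only, ignoring tokenizer files and quantization metadata overhead; quant names follow common llama.cpp conventions) shows why the GGUF builds matter on memory-constrained machines:

```python
# Back-of-the-envelope memory footprint for the FP16 checkpoint and for
# hypothetical low-bit GGUF quantizations. Illustrative approximations,
# not measurements; real files carry extra metadata overhead.

params = 1.1e9  # parameter count reported on the model card

for label, bits in [("FP16", 16), ("Q8_0 (8-bit)", 8), ("Q4_0 (4-bit)", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{label}: ~{gb:.2f} GB")
```

The ~2.2 GB FP16 figure lines up with the ~2.11 GB of safetensors shards in the repo, while 4-bit conversions bring the weights down to roughly half a gigabyte.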
Example Usage
Example (python):
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
model_id = "UnfilteredAI/UNfilteredAI-1B"
# Load tokenizer and model (FP16 weights on available device)
# If you have limited memory, consider using quantized community builds (gguf/llama.cpp).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = "Write a short, surreal opening paragraph for a sci-fi story:"
outputs = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9, top_p=0.95)
print(outputs[0]["generated_text"])
Benchmarks
Public benchmark scores: No official benchmark scores are published on the model card or repository. (Source: https://huggingface.co/UnfilteredAI/UNfilteredAI-1B)
Key Information
- Category: Language Models
- Type: AI Language Models Tool