AI Sheets - AI Productivity Tool

Overview

AI Sheets is a spreadsheet-like web tool from Hugging Face that lets you apply large language and generative models directly to tabular data using natural-language prompts. The core idea is the "model-backed" column: instead of typing values by hand, you define a prompt for a column, and the sheet uses a selected model to generate or analyze cell values across every row. This makes it easy to run labeling, data enrichment, text transformation, and image generation tasks at scale without writing glue code.

Designed for rapid experimentation, AI Sheets supports testing multiple models side by side, swapping the model behind a column, and running batch inference across entire datasets. It works with datasets you upload (CSV) or datasets hosted on the Hugging Face Hub, and supports both text and image workflows, covering analysis (e.g., classification and extraction) as well as generation (e.g., captions and synthetic images). The project is documented on the Hugging Face blog and relies on models available through the Hugging Face Hub and Inference API for execution (see https://huggingface.co/blog/aisheets and https://huggingface.co/docs/api-inference).

Key Features

  • Model-backed columns: define a prompt per column and populate cells via model inference (see the sketch after this list)
  • Compare outputs by swapping or running multiple models side-by-side for the same column
  • Image analysis and generation inside cells using text-to-image and vision-capable models
  • Import/export support for CSV and Hub-hosted datasets for rapid dataset iteration
  • Prompt templates and batch-run controls to apply consistent prompts across rows
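
AI Sheets applies these prompts through its web UI, but the column-of-prompts idea can be sketched in a few lines of Python. The snippet below is a minimal illustration only, assuming the huggingface_hub package and its InferenceClient, an HF_TOKEN environment variable, and an arbitrary instruction-tuned model; none of these specifics are prescribed by AI Sheets itself.

import os

from huggingface_hub import InferenceClient

# A "model-backed column" in miniature: one prompt template, applied to every row.
# The model name, prompt wording, and example rows are illustrative placeholders.
client = InferenceClient(model="HuggingFaceH4/zephyr-7b-beta", token=os.environ.get("HF_TOKEN"))

rows = ["The battery easily lasts two days.", "The screen cracked after a week."]
prompt_template = "Answer with a single word, positive or negative. Sentiment of this review:\n{text}"

# Each generated string becomes the cell value of the new column.
sentiment_column = [
    client.text_generation(prompt_template.format(text=row), max_new_tokens=5).strip()
    for row in rows
]
print(sentiment_column)

Inside AI Sheets the same loop happens behind the interface: edit the column prompt or swap the model, and the cells re-populate.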

Example Usage

Example (python):

import json
import os

import pandas as pd
import requests

# Example: emulate an AI Sheets "model-backed column" using the Hugging Face Inference API.
# - Assumes you exported/imported a CSV from AI Sheets or are operating on the same tabular data.
# - Replace MODEL with a Hugging Face text model (e.g., 'google/flan-t5-large') and set HF_TOKEN.

HF_TOKEN = os.environ.get('HF_TOKEN')  # export HF_TOKEN='hf_...'
MODEL = 'google/flan-t5-large'
API_URL = f'https://api-inference.huggingface.co/models/{MODEL}'
HEADERS = {"Authorization": f"Bearer {HF_TOKEN}"}

# Load data
df = pd.read_csv('input_dataset.csv')

# Define a prompt template for a new model-backed column
prompt_template = (
    "Extract the sentiment and one-sentence summary from the following review:\n\n"
    "Review: {text}\n\nRespond as JSON with keys: sentiment, summary."
)

outputs = []
for text in df['review_text'].astype(str):
    payload = {"inputs": prompt_template.format(text=text)}
    resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    resp.raise_for_status()
    # Inference response format depends on model and task; parse accordingly
    result = resp.json()
    # Defensive check: treat an explicit error payload as missing values for this row
    if isinstance(result, dict) and 'error' in result:
        outputs.append({"sentiment": None, "summary": None})
    else:
        # Naive handling: take the first generated text if the API returns a list
        if isinstance(result, list):
            text_out = result[0].get('generated_text') if isinstance(result[0], dict) else str(result[0])
        else:
            text_out = str(result)
        # Attempt to parse the model's JSON response
        try:
            parsed = json.loads(text_out)
            outputs.append({
                "sentiment": parsed.get('sentiment'),
                "summary": parsed.get('summary')
            })
        except Exception:
            outputs.append({"sentiment": None, "summary": text_out[:200]})

# Attach model outputs as new columns and save
df['sentiment'] = [o['sentiment'] for o in outputs]
df['summary'] = [o['summary'] for o in outputs]
df.to_csv('annotated_dataset.csv', index=False)

print('Finished: saved annotated_dataset.csv')
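
Two practical notes on the loop above. First, the serverless Inference API can answer 503 while a model is still loading, so batch runs usually need retry handling; the sketch below, reusing API_URL, HEADERS, and prompt_template from the script, shows one way to do this with the API's documented wait_for_model option (the retry count and back-off are arbitrary choices, not AI Sheets defaults). Second, the resulting annotated_dataset.csv can be re-imported into AI Sheets or uploaded to the Hugging Face Hub for further iteration.

import time

def query_with_retry(payload, retries=3, backoff=10):
    # Ask the API to hold the request until the model is loaded, and retry a few
    # times if it still reports that the model is unavailable (HTTP 503).
    payload = {**payload, "options": {"wait_for_model": True}}
    for attempt in range(retries):
        resp = requests.post(API_URL, headers=HEADERS, json=payload, timeout=120)
        if resp.status_code == 503 and attempt < retries - 1:
            time.sleep(backoff)
            continue
        resp.raise_for_status()
        return resp.json()

# Example call: query_with_retry({"inputs": prompt_template.format(text="Great product!")})
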
Last Refreshed: 2026-01-09

Key Information

  • Category: Productivity
  • Type: AI Productivity Tool