AI SDK Provider for Gemini CLI - AI SDKs and Libraries Tool
Overview
AI SDK Provider for Gemini CLI is an unofficial community provider that plugs Google's Gemini models into the Vercel AI SDK. According to the project repository, the provider routes model access through the Gemini CLI Core library and Google's Cloud Code endpoints, allowing applications built on the Vercel AI SDK to call Gemini models without depending directly on an official Google SDK. The provider is written in TypeScript and emphasizes type safety, using Zod schemas to validate inputs and outputs at runtime. It supports streaming responses, multimodal inputs (images and other non-text data routed through the Gemini CLI), and function/tool calling so model outputs can trigger external actions. It also supports OAuth-based authentication flows that integrate with Google account credentials. Because it is a community (unofficial) implementation, the repository recommends reviewing security and usage details before production use. According to the GitHub README and examples, the provider targets TypeScript projects and prioritizes developer ergonomics when integrating Gemini into Vercel AI SDK-based apps.
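As a minimal sketch of the multimodal path described above, an image can be passed alongside text using the AI SDK's message parts. The createGeminiProvider factory and authType option below follow the repository's README, and the model name and file path are illustrative assumptions; verify both against your installed versions.
import { readFileSync } from 'node:fs';
import { generateText } from 'ai';
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

// Provider factory and option names per the repository README (verify locally).
const gemini = createGeminiProvider({ authType: 'oauth-personal' });

async function describeImage() {
  const { text } = await generateText({
    model: gemini('gemini-2.5-pro'), // illustrative model name
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Describe this image in one sentence.' },
          { type: 'image', image: readFileSync('photo.png') }, // hypothetical local file
        ],
      },
    ],
  });
  console.log(text);
}

describeImage();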
GitHub Statistics
- Stars: 47
- Forks: 13
- Contributors: 3
- License: MIT
- Primary Language: TypeScript
- Last Updated: 2026-01-02T01:46:39Z
- Latest Release: v2.0.1
Key Features
- Streaming response support for low-latency, chunked model outputs (see the sketch after this list)
- Multimodal input handling through the Gemini CLI Core (images and other modalities)
- Tool/function calling to route model outputs to external actions or toolchains
- OAuth authentication support for Google account-based credential flows
- Full TypeScript support with Zod schemas for strong runtime/type validation
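As referenced in the streaming item above, here is a minimal sketch of chunked output using the Vercel AI SDK's streamText helper. The provider factory and option names come from the repository README and the model name is illustrative; treat both as assumptions to check against your installed versions.
import { streamText } from 'ai';
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

const gemini = createGeminiProvider({ authType: 'oauth-personal' });

async function stream() {
  const result = streamText({
    model: gemini('gemini-2.5-pro'), // illustrative model name
    prompt: 'Stream a two-sentence overview of this provider.',
  });
  // textStream yields text chunks as the model produces them.
  for await (const chunk of result.textStream) {
    process.stdout.write(chunk);
  }
}

stream();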
Code Examples
Python
import subprocess

# Example: invoke the local `gemini` CLI binary that the provider relies on
# under the hood to run a one-shot prompt. Flags vary by CLI version;
# `-m/--model` and `-p/--prompt` are documented for recent Gemini CLI releases,
# but check `gemini --help` for your installed version. The model name is illustrative.
result = subprocess.run(
    ["gemini", "-m", "gemini-2.5-pro", "-p", "Hello from Python via Gemini CLI!"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("Model response:\n", result.stdout)
else:
    print("Gemini CLI error:\n", result.stderr)
Curl
curl -X POST \
  -H "Authorization: Bearer YOUR_OAUTH_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  "https://generativelanguage.googleapis.com/v1beta/models/MODEL:generateContent" \
  -d '{
    "contents": [{"parts": [{"text": "Say hello from curl via the Generative Language API"}]}],
    "generationConfig": {"temperature": 0.2}
  }'
# Notes: Replace MODEL (e.g. gemini-2.5-pro) and the endpoint path to match the
# Generative Language API version you are using. The provider repository delegates
# to Google Cloud endpoints; this curl is a direct example of calling a Generative
# API endpoint with an OAuth bearer token.
JavaScript
import { generateText } from 'ai';
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

// Example TypeScript/JavaScript usage with the Vercel AI SDK.
// Option names mirror the repository's documented keys; verify against the README.
const gemini = createGeminiProvider({
  // OAuth credentials are handled externally (e.g. cached by the Gemini CLI login flow)
  authType: 'oauth-personal',
});

async function run() {
  const { text } = await generateText({
    model: gemini('gemini-2.5-pro'), // illustrative model name
    prompt: 'Write a short summary of the provider features.',
  });
  console.log(text);
}

run();
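Tool/function calling, listed among the key features, can be sketched with the AI SDK's tool helper and a Zod schema. This is a hedged sketch assuming AI SDK v4-style tool definitions (the parameters key; newer SDK majors rename some fields), and the getWeather tool is a hypothetical stub, not part of the provider.
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

const gemini = createGeminiProvider({ authType: 'oauth-personal' });

async function runWithTool() {
  const { text } = await generateText({
    model: gemini('gemini-2.5-pro'), // illustrative model name
    prompt: 'What is the weather like in Paris right now?',
    tools: {
      // Hypothetical tool: a stub standing in for a real weather lookup.
      getWeather: tool({
        description: 'Look up the current weather for a city',
        parameters: z.object({ city: z.string() }),
        execute: async ({ city }) => ({ city, tempC: 21, summary: 'clear' }),
      }),
    },
    maxSteps: 2, // allow the model to call the tool and then produce a final answer
  });
  console.log(text);
}

runWithTool();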
API Overview
- Authentication: OAuth (a brief configuration sketch follows below)
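A brief sketch of how authentication is wired up, based on the option names documented in the repository README; treat these as assumptions and verify against your installed version.
import { createGeminiProvider } from 'ai-sdk-provider-gemini-cli';

// OAuth: reuse the Google account credentials cached by the Gemini CLI login flow.
const oauthProvider = createGeminiProvider({ authType: 'oauth-personal' });

// API key: the README also documents an API-key mode (assumption: verify option names).
const keyProvider = createGeminiProvider({
  authType: 'api-key',
  apiKey: process.env.GEMINI_API_KEY,
});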
Key Information
- Category: SDKs and Libraries
- Type: AI SDKs and Libraries Tool