PocketPal AI - AI Local Apps Tool

Overview

PocketPal AI is an open-source mobile application that brings on-device language model assistance to iOS and Android. It lets users download, load, and chat with small language models (SLMs) directly on their phones, without relying on cloud APIs. The app exposes configurable inference settings and runtime performance metrics so users can balance latency, memory use, and responsiveness across different models and devices. Designed for privacy-sensitive and offline-first use cases, PocketPal AI focuses on delivering practical, locally run LLM experiences on consumer phones. According to the GitHub repository, the project is MIT-licensed, written primarily in TypeScript, and maintained by an active community. The repo documents mobile-focused features such as model download management, model runtime controls, and on-device performance reporting, making it suitable for developers and advanced users who want to experiment with small language models on Android and iOS.

GitHub Statistics

  • Stars: 5,411
  • Forks: 513
  • Contributors: 14
  • License: MIT
  • Primary Language: TypeScript
  • Last Updated: 2025-12-30T08:46:55Z
  • Latest Release: v1.11.12

Repository activity shows strong community interest, and the project appears actively maintained, with its last recorded update on 2025-12-30. The contributor count and fork volume indicate ongoing development and external contributions; the issue tracker, pull requests, and commit history on GitHub are the best sources for deeper activity patterns and roadmap signals.

Installation

Clone the repository and build from source:

git clone https://github.com/a-ghorbani/pocketpal-ai.git
cd pocketpal-ai
Then open the ios/ directory in Xcode to build the iOS app, or the android/ directory in Android Studio to build the Android app.

Key Features

  • Download and run small language models (SLMs) locally on iOS and Android devices
  • Load and switch between multiple downloaded models without internet connectivity
  • Customizable inference settings for latency, accuracy, and resource trade-offs
  • Runtime performance metrics: memory, CPU usage, and inference timing
  • Privacy-first on-device operation—no cloud API calls required for inference
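On-device runtimes typically expose sampling and resource knobs of the kind the list above describes. As a rough illustration only, the TypeScript sketch below (the repo's primary language) shows how such settings might be modeled and kept in safe ranges; the names `InferenceSettings` and `clampSettings`, the fields, and the default values are all assumptions for this sketch, not PocketPal AI's actual API.

```typescript
// Hypothetical model of configurable inference settings for an
// on-device SLM runtime. All names and ranges here are illustrative.
interface InferenceSettings {
  temperature: number;   // sampling randomness, typically 0.0-2.0
  topK: number;          // restrict sampling to the K most likely tokens
  topP: number;          // nucleus-sampling threshold, 0.0-1.0
  contextLength: number; // token budget; larger contexts cost more memory
  threads: number;       // CPU threads; more threads trade battery for speed
}

const DEFAULTS: InferenceSettings = {
  temperature: 0.7,
  topK: 40,
  topP: 0.95,
  contextLength: 2048,
  threads: 4,
};

// Merge user overrides with defaults and clamp each value into a safe
// range, so a bad setting degrades output quality instead of crashing
// the runtime on a phone with limited memory.
function clampSettings(
  overrides: Partial<InferenceSettings>
): InferenceSettings {
  const s = { ...DEFAULTS, ...overrides };
  return {
    temperature: Math.min(Math.max(s.temperature, 0), 2),
    topK: Math.max(Math.trunc(s.topK), 1),
    topP: Math.min(Math.max(s.topP, 0), 1),
    contextLength: Math.max(Math.trunc(s.contextLength), 256),
    threads: Math.max(Math.trunc(s.threads), 1),
  };
}
```

For example, `clampSettings({ temperature: 3.0 })` returns settings with `temperature` clamped to 2, the upper bound, while all other fields keep their defaults.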

Community

An active open-source community maintains the project. It accepts issues and pull requests on GitHub, and recent commits as of 2025-12-30 indicate ongoing maintenance and community engagement.

Last Refreshed: 2026-01-09

Key Information

  • Category: Local Apps
  • Type: AI Local Apps Tool