STRIDE GPT - AI Security Tool

Overview

STRIDE GPT is an open-source, AI-powered threat modeling assistant that applies the STRIDE methodology to generate threat models, attack trees, and mitigations from supplied application details. It leverages OpenAI GPT models (and other supported LLM providers) to identify threats mapped to STRIDE categories, produce DREAD-style risk scores for each identified threat, and output actionable artifacts such as Gherkin test cases and recommended mitigations. The tool is intended to accelerate threat modeling during design and review phases by converting textual system descriptions, diagrams, or component lists into structured threat analyses.

According to the GitHub repository (https://github.com/mrwadams/stride-gpt), STRIDE GPT focuses on automating the repetitive parts of threat modeling while keeping results human-reviewable: generated attack trees and mitigations are presented in readable form and can be edited or augmented. It supports multiple LLM backends and configurable prompts, and produces outputs useful for security reviews, test generation, and developer handoffs. Because the project is open source, teams can inspect or adapt its prompt templates and integration adapters to match organizational risk frameworks or CI workflows.
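To make the "DREAD-style risk scoring" concrete, the sketch below shows the conventional DREAD calculation: five factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability), each rated on a 0-10 scale and averaged into an overall score. This is an illustration of the general DREAD scheme, not STRIDE GPT's actual implementation; the class and field names are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class DreadScore:
    """DREAD-style risk factors, each rated 0-10 (illustrative scale)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def overall(self) -> float:
        """Overall risk is the mean of the five factor ratings."""
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# Example: scoring a hypothetical spoofing threat
spoofing = DreadScore(damage=8, reproducibility=6, exploitability=7,
                      affected_users=9, discoverability=5)
print(f"Risk: {spoofing.overall():.1f}")  # Risk: 7.0
```

Averaging keeps every factor on the same scale, so two threats can be ranked directly by their overall score during review.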

Installation

Install via Docker:

git clone https://github.com/mrwadams/stride-gpt.git
cd stride-gpt
docker build -t stride-gpt .
# The app is served by Streamlit, which listens on port 8501 by default
docker run --rm -e OPENAI_API_KEY=YOUR_KEY -p 8501:8501 stride-gpt

Key Features

  • Generates STRIDE-classified threats from application descriptions and system components
  • Produces DREAD-style risk scores for each identified threat
  • Creates attack trees that map threat paths and affected assets
  • Suggests mitigations and countermeasures tied to each STRIDE category
  • Outputs Gherkin-format test cases for behavioral/acceptance testing
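To illustrate the Gherkin-format output mentioned above, the hypothetical helper below renders an identified threat and its mitigation as a Given/When/Then scenario skeleton. The function name, signature, and scenario wording are assumptions for this sketch; they show the general shape of such output, not STRIDE GPT's own code.

```python
def threat_to_gherkin(category: str, threat: str, mitigation: str) -> str:
    """Render a threat as a Gherkin scenario skeleton.

    Hypothetical helper illustrating the output format; not taken
    from the STRIDE GPT source.
    """
    return "\n".join([
        f"Scenario: Mitigate {category} threat",
        f"  Given the system is exposed to: {threat}",
        f"  When the mitigation '{mitigation}' is in place",
        "  Then the threat should no longer be exploitable",
    ])

print(threat_to_gherkin(
    "Spoofing",
    "an attacker forges session tokens",
    "signed, short-lived JWTs",
))
```

Scenarios in this shape can be dropped into a BDD suite (e.g. Cucumber or behave) as executable placeholders, giving developers a concrete handoff artifact to flesh out with step definitions.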

Community

STRIDE GPT is published as an open-source project on GitHub (see the repository link above). The project accepts issues and pull requests for bug reports, feature requests, and integrations. Because the repository hosts the full source, contributors can review prompt templates, LLM adapters, and example configurations. For release notes, contributor guidelines, licensing, and a sense of recent activity, check the repository's README, Issues, and Pull Requests tabs.

Last Refreshed: 2026-01-09

Key Information

  • Category: Security
  • Type: AI Security Tool