selection-based ai text transformation with in-place replacement
Captures user-selected text in the VS Code editor, sends it to a configured LLM (OpenAI, Anthropic, or Gemini), and replaces the selection with the model's response in-place. Uses VS Code's TextEditor API to read selection boundaries and apply edits atomically, with configurable output modes (replace vs. new file). Integrates via keyboard shortcut (Alt+Shift+I by default) and Command Palette for frictionless invocation.
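A minimal sketch of how the selection capture and atomic replacement could look, assuming a hypothetical askLlm() helper for the provider call (not the extension's actual source):

```typescript
import * as vscode from 'vscode';

// Capture the active selection, query the model, and replace the selection in place.
async function transformSelection(askLlm: (prompt: string) => Promise<string>): Promise<void> {
  const editor = vscode.window.activeTextEditor;
  if (!editor || editor.selection.isEmpty) {
    vscode.window.showWarningMessage('Select some text first.');
    return;
  }
  const selection = editor.selection;
  const original = editor.document.getText(selection);
  const response = await askLlm(original);
  // A single edit() call applies the replacement as one undoable operation.
  await editor.edit(editBuilder => editBuilder.replace(selection, response));
}
```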
Unique: Integrates directly with VS Code's TextEditor API to apply atomic in-place replacements, avoiding context-switching to separate chat windows or panels. Uses VS Code SecretStorage for secure API key persistence across sessions, with automatic migration from legacy OpenAI globalState keys.
vs alternatives: Faster workflow than GitHub Copilot Chat for single-selection edits because it operates synchronously on the current selection without requiring panel navigation or chat context management.
file-level ai analysis and transformation
Processes the entire active file (not just a selection) by sending its full content to the configured LLM, enabling whole-file operations like refactoring, code audits, or explanations. Accessible via the dedicated `Ask GPT with File` command. Output can replace the file in-place or create a new file, configurable via `GPT: Change Output Mode`. Respects token limits and may truncate very large files in remote/virtual workspaces for safety.
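A rough sketch of the whole-file path, assuming an askLlm() helper and an illustrative character cap for remote/virtual documents (the real guardrail values are not documented):

```typescript
import * as vscode from 'vscode';

const MAX_CHARS = 100_000; // assumed safety cap for remote/virtual workspaces

// Read the full active document, trim oversized non-'file' documents, and query the model.
async function askWithFile(askLlm: (prompt: string) => Promise<string>): Promise<string | undefined> {
  const editor = vscode.window.activeTextEditor;
  if (!editor) { return undefined; }
  let content = editor.document.getText();
  if (editor.document.uri.scheme !== 'file' && content.length > MAX_CHARS) {
    content = content.slice(0, MAX_CHARS);
    vscode.window.showWarningMessage('Large file truncated for remote/virtual workspace safety.');
  }
  return askLlm(content);
}
```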
Unique: Provides a dedicated command for full-file operations distinct from selection-based editing, with safety guardrails for remote workspaces. Integrates with VS Code's file system abstraction to handle virtual and remote workspaces gracefully.
vs alternatives: More comprehensive than selection-based tools for whole-file refactoring because it processes the entire file context in a single request, avoiding fragmented edits across multiple selections.
debug logging with intentional secret exclusion
Provides debug logging for troubleshooting extension behavior, with intentional exclusion of API keys, secrets, and full prompt contents to prevent accidental credential exposure. Debug logs can be accessed via VS Code's Output panel. Enables developers to diagnose issues without risking credential leakage in logs.
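A minimal sketch of redacted debug logging through an OutputChannel; the channel name and redaction rules below are assumptions, not the extension's exact behavior:

```typescript
import * as vscode from 'vscode';

const channel = vscode.window.createOutputChannel('GPT Debug'); // assumed channel name

// Log diagnostic details while replacing secret-like fields with their length only.
function debugLog(message: string, details?: Record<string, unknown>): void {
  const safe = details
    ? Object.fromEntries(
        Object.entries(details).map(([key, value]) =>
          /key|secret|token|prompt/i.test(key)
            ? [key, `<redacted, ${String(value).length} chars>`]
            : [key, value]
        )
      )
    : undefined;
  channel.appendLine(
    `[${new Date().toISOString()}] ${message}${safe ? ' ' + JSON.stringify(safe) : ''}`
  );
}
```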
Unique: Implements intentional secret exclusion in debug logs, prioritizing security over diagnostic completeness. Uses VS Code's Output panel for log access, integrating with native debugging workflows.
vs alternatives: More secure than tools with verbose logging because it excludes secrets and sensitive content by design, reducing accidental credential exposure in logs shared for debugging.
workspace-scoped instruction injection via .gpt-instruction files
Automatically discovers and prepends project-level instructions from `.gpt-instruction` files in the workspace root or parent directories to every AI query. Supports two lookup modes: `workspaceRoot` (reads from the workspace folder root) and `nearestParent` (uses the closest parent file; more expensive in large repos). Empty `.gpt-instruction` files suppress parent instructions. Content beyond the configured maximum size is truncated with a warning. Enables consistent project-wide prompting without manual instruction repetition.
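A sketch of how a nearestParent-style lookup could walk up the directory tree, assuming Node's fs/path are available (local workspaces) and an illustrative size cap:

```typescript
import * as path from 'path';
import { promises as fs } from 'fs';

const MAX_INSTRUCTION_BYTES = 8_192; // assumed cap, not a documented default

// Walk from startDir toward stopDir (the workspace root), returning the first .gpt-instruction found.
async function findInstruction(startDir: string, stopDir: string): Promise<string | null> {
  let dir = startDir;
  while (true) {
    try {
      const text = await fs.readFile(path.join(dir, '.gpt-instruction'), 'utf8');
      // An empty file explicitly suppresses instructions from parent folders.
      if (text.trim().length === 0) { return null; }
      return text.slice(0, MAX_INSTRUCTION_BYTES);
    } catch {
      // Not present in this folder; keep walking up.
    }
    if (dir === stopDir || path.dirname(dir) === dir) { return null; }
    dir = path.dirname(dir);
  }
}
```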
Unique: Uses file system watchers and multi-root workspace awareness to dynamically resolve project instructions per folder, with explicit suppression via empty files. Integrates instruction injection at the prompt-building layer, ensuring all queries include project context without user intervention.
vs alternatives: More flexible than hardcoded system prompts because instructions are version-controlled alongside code and can be updated without restarting the extension or reconfiguring settings.
multi-provider llm abstraction with runtime provider switching
Abstracts OpenAI, Anthropic, and Google Gemini APIs behind a unified interface, allowing users to switch providers and models at runtime via `GPT: Change Provider` and `GPT: Change Model` commands. Maintains separate API keys per provider in VS Code SecretStorage. Supports built-in model lists per provider and custom model IDs. Model list can be refreshed online (requires API key). No code changes required to switch providers; configuration is entirely UI-driven.
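A rough sketch of what such a provider abstraction might look like; the interface and registry names are illustrative, not the extension's actual types:

```typescript
// Common surface each provider (OpenAI, Anthropic, Gemini) must implement.
interface LlmProvider {
  readonly id: 'openai' | 'anthropic' | 'gemini';
  listModels(): Promise<string[]>; // built-in list, optionally refreshed online
  complete(model: string, prompt: string, apiKey: string): Promise<string>;
}

// Registry that commands like "GPT: Change Provider" would mutate at runtime.
class ProviderRegistry {
  private providers = new Map<string, LlmProvider>();
  private activeId = 'openai';

  register(provider: LlmProvider): void {
    this.providers.set(provider.id, provider);
  }

  setActive(id: string): void {
    if (!this.providers.has(id)) { throw new Error(`Unknown provider: ${id}`); }
    this.activeId = id;
  }

  get active(): LlmProvider {
    return this.providers.get(this.activeId)!;
  }
}
```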
Unique: Implements provider abstraction at the extension level, allowing seamless switching without code changes. Uses VS Code SecretStorage per-provider key management with automatic migration from legacy OpenAI globalState keys, ensuring backward compatibility.
vs alternatives: More flexible than single-provider tools like GitHub Copilot because users can switch providers and models without leaving VS Code or reconfiguring API keys, enabling cost optimization and capability comparison.
secure api key management with secretstorage persistence and session fallback
Stores API keys for OpenAI, Anthropic, and Gemini in VS Code SecretStorage (encrypted, OS-level credential store) when available. Falls back to session-only storage if SecretStorage is unavailable (e.g., in certain remote setups). Automatically migrates legacy OpenAI keys from globalState to SecretStorage on first run. Provides dedicated `GPT: Set API Key` and `GPT: Manage API Keys` commands for fast-path and bulk key management. Debug logs intentionally exclude secrets to prevent accidental exposure.
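A minimal sketch of SecretStorage-backed key handling with a session fallback and legacy migration; the storage key names (including the legacy globalState key) are assumptions for illustration:

```typescript
import * as vscode from 'vscode';

const sessionKeys = new Map<string, string>(); // fallback when SecretStorage is unavailable

async function storeKey(ctx: vscode.ExtensionContext, provider: string, key: string): Promise<void> {
  try {
    await ctx.secrets.store(`gpt.apiKey.${provider}`, key); // OS-level encrypted store
  } catch {
    sessionKeys.set(provider, key); // session-only fallback
  }
}

async function getKey(ctx: vscode.ExtensionContext, provider: string): Promise<string | undefined> {
  let stored: string | undefined;
  try {
    stored = await ctx.secrets.get(`gpt.apiKey.${provider}`);
  } catch {
    stored = undefined;
  }
  if (stored) { return stored; }

  // One-time migration of a legacy OpenAI key from globalState into SecretStorage.
  if (provider === 'openai') {
    const legacy = ctx.globalState.get<string>('openaiApiKey'); // assumed legacy key name
    if (legacy) {
      await storeKey(ctx, provider, legacy);
      await ctx.globalState.update('openaiApiKey', undefined);
      return legacy;
    }
  }
  return sessionKeys.get(provider);
}
```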
Unique: Leverages VS Code's native SecretStorage API for OS-level encryption, avoiding plaintext storage in extension globalState. Implements automatic migration from legacy OpenAI keys and intentional secret exclusion in debug logs, demonstrating security-first design.
vs alternatives: More secure than environment variable or config file storage because credentials are encrypted at the OS level and isolated per VS Code instance, reducing exposure surface compared to tools that require plaintext API keys in settings.
configurable output mode switching (in-place replacement vs. new file creation)
Allows users to toggle between two output modes via `GPT: Change Output Mode` command: (1) Replace Selection/File — overwrites the original text with AI response, or (2) New File — creates a new file with the response, leaving original untouched. Mode is global and applies to all subsequent queries until changed. Enables flexible workflows: destructive edits for refactoring, non-destructive for comparison or review.
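A sketch of the mode toggle as a quick pick that persists globally; the state key and labels are assumptions:

```typescript
import * as vscode from 'vscode';

type OutputMode = 'replaceInPlace' | 'newFile';

// Let the user pick an output mode and persist it for all subsequent queries.
async function changeOutputMode(ctx: vscode.ExtensionContext): Promise<void> {
  const picked = await vscode.window.showQuickPick(
    [
      { label: 'Replace Selection/File', mode: 'replaceInPlace' as OutputMode },
      { label: 'New File', mode: 'newFile' as OutputMode },
    ],
    { placeHolder: 'Choose where AI responses should go' }
  );
  if (picked) {
    await ctx.globalState.update('gpt.outputMode', picked.mode); // assumed state key
  }
}
```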
Unique: Provides global output mode toggle without per-invocation configuration, simplifying UX for users with consistent workflows. Integrates with VS Code's file system and editor APIs to handle both in-place edits and new file creation transparently.
vs alternatives: More flexible than tools with fixed output modes (e.g., always in-place) because users can switch between destructive and non-destructive workflows without tool changes, supporting both rapid iteration and careful review.
configurable token limit enforcement with truncation warnings
Allows users to set a maximum token limit for AI queries via the `GPT: Change Token Limit` command. When input (selection, file, or instructions) exceeds the limit, content is truncated and a warning is displayed to the user. Prevents accidental API errors or excessive costs from oversized requests. The token limit is configurable per session, though the default value is not documented.
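A sketch of pre-flight enforcement using a rough characters-per-token heuristic; the 4-chars-per-token ratio is an assumption, not a real tokenizer:

```typescript
import * as vscode from 'vscode';

// Truncate the prompt to an approximate token budget and warn the user when it happens.
function enforceTokenLimit(prompt: string, maxTokens: number): string {
  const approxTokens = Math.ceil(prompt.length / 4); // crude estimate
  if (approxTokens <= maxTokens) { return prompt; }

  vscode.window.showWarningMessage(
    `Input (~${approxTokens} tokens) exceeds the ${maxTokens}-token limit; content was truncated.`
  );
  return prompt.slice(0, maxTokens * 4);
}
```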
Unique: Implements token limit enforcement at the prompt-building layer before API calls, preventing oversized requests from reaching the LLM. Provides user warnings on truncation, enabling informed decisions about content prioritization.
vs alternatives: More cost-aware than tools without token limits because it prevents unexpectedly expensive API calls on large files and provides visibility into truncation decisions.