node-based workflow graph execution with visual editor
Executes directed acyclic graphs (DAGs) of custom invocation nodes through a FastAPI-backed invocation system that serializes node definitions as OpenAPI schemas. The React frontend provides a visual node editor where users connect outputs to inputs, and the backend's BaseInvocation system deserializes and executes the graph sequentially or in parallel where dependencies allow. This enables non-linear, reusable generation pipelines without code.
Unique: Uses OpenAPI schema generation from Python type hints to automatically expose node parameters in the UI, enabling dynamic node discovery and validation without manual schema definition. The BaseInvocation system provides a unified interface for both built-in and user-defined nodes with automatic serialization/deserialization.
vs alternatives: More flexible than Stable Diffusion WebUI's linear pipeline because it supports arbitrary DAG topologies and custom node composition, while the visual editor keeps the mental model simpler than scripting pipelines directly in code with libraries such as diffusers. (ComfyUI offers comparable graph flexibility; the distinction there is UI polish rather than execution model.)
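The dependency-ordered execution described above can be sketched with the standard library's `graphlib`; the `Node`-as-callable shape below is an illustrative simplification, since the real BaseInvocation system adds OpenAPI-schema validation and parallel scheduling on top of this idea.

```python
# Minimal sketch of DAG execution in dependency order.
from graphlib import TopologicalSorter

def execute_graph(nodes, edges):
    """nodes: {node_id: callable(upstream_results) -> output}
    edges: {node_id: set of upstream node_ids it depends on}"""
    order = TopologicalSorter(edges)
    results = {}
    for node_id in order.static_order():  # raises CycleError on non-DAGs
        upstream = {dep: results[dep] for dep in edges.get(node_id, ())}
        results[node_id] = nodes[node_id](upstream)
    return results

# Toy three-node pipeline: prompt -> embed -> generate.
nodes = {
    "prompt": lambda deps: "a cat",
    "embed": lambda deps: f"emb({deps['prompt']})",
    "generate": lambda deps: f"img({deps['embed']})",
}
edges = {"prompt": set(), "embed": {"prompt"}, "generate": {"embed"}}
print(execute_graph(nodes, edges)["generate"])  # img(emb(a cat))
```

Because `static_order()` only yields a node after all of its predecessors, independent branches could also be dispatched concurrently, which is what "parallel where dependencies allow" amounts to.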
unified canvas with inpainting, outpainting, and brush controls
Konva-based HTML5 canvas rendering system that manages multiple control layers (base image, mask, brush strokes, selection regions) with real-time compositing. The canvas supports inpainting (selective region regeneration) and outpainting (extending image boundaries) through mask-aware conditioning passed to the diffusion pipeline. Brush tools paint masks directly onto the canvas layer system; those masks are then converted to conditioning tensors for the model.
Unique: Implements a layer-based canvas architecture where masks, brush strokes, and base images are managed as separate Konva layers with real-time compositing, allowing non-destructive editing and easy undo/redo. Masks are automatically converted to conditioning tensors that guide the diffusion model's generation.
vs alternatives: More intuitive than ComfyUI's mask node approach because the visual canvas provides immediate feedback on brush placement, while maintaining the flexibility to adjust mask parameters programmatically through the node system.
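The non-destructive layer idea is independent of Konva itself. A conceptual sketch (in Python rather than TypeScript, with an invented `LayerStack` class standing in for the Konva layer tree): layers stay separate and are only combined at composite time, so undo is just removing a layer, never mutating the base image.

```python
# Layer-based compositing with per-pixel masks and undo.
class LayerStack:
    def __init__(self, width, height):
        self.base = [[0.0] * width for _ in range(height)]
        self.layers = []   # list of (pixels, mask) pairs, mask values in [0, 1]
        self._undo = []

    def add_layer(self, pixels, mask):
        self.layers.append((pixels, mask))
        self._undo.append("add")

    def undo(self):
        if self._undo and self._undo.pop() == "add":
            self.layers.pop()          # non-destructive: base is untouched

    def composite(self):
        out = [row[:] for row in self.base]
        for pixels, mask in self.layers:
            for y, row in enumerate(mask):
                for x, m in enumerate(row):    # alpha-blend per pixel
                    out[y][x] = out[y][x] * (1 - m) + pixels[y][x] * m
        return out
```

A brush stroke at 50% opacity over a black base composites to 0.5, and calling `undo()` restores the original base exactly, which is the property that makes redoing mask edits cheap.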
redux-based state management with rtk query for api caching
React frontend uses Redux for global state management (generation parameters, selected models, UI state) and RTK Query for automatic API response caching and synchronization. RTK Query handles cache invalidation when mutations occur (e.g., generating an image invalidates the gallery), reducing unnecessary API calls. The Redux store is persisted to localStorage, allowing the UI to restore state across browser sessions.
Unique: Uses RTK Query to automatically manage API cache invalidation based on mutations, reducing boilerplate compared to manual cache management. Redux state is persisted to localStorage, allowing UI state recovery across sessions.
vs alternatives: More predictable than Context API for complex state because Redux enforces unidirectional data flow and centralizes updates, while more efficient than manual fetch-and-store code because RTK Query deduplicates in-flight requests and invalidates caches automatically.
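The invalidation mechanism above is RTK Query's tag system (`providesTags`/`invalidatesTags`): queries register under tags, and a mutation drops every cached entry carrying a tag it invalidates. Sketched here in Python for brevity, with invented names:

```python
# Tag-based cache invalidation in the spirit of RTK Query.
class TaggedCache:
    def __init__(self):
        self._data = {}    # cache key -> (value, frozenset of tags)

    def query(self, key, tags, fetch):
        if key not in self._data:                    # cache miss -> fetch
            self._data[key] = (fetch(), frozenset(tags))
        return self._data[key][0]                    # cache hit -> no request

    def mutate(self, invalidates):
        # Drop every cached entry that carries an invalidated tag.
        stale = [k for k, (_, t) in self._data.items() if t & set(invalidates)]
        for k in stale:
            del self._data[k]
```

Generating an image would call `mutate({"Image"})`, so the next gallery `query` refetches while unrelated cached endpoints stay warm; that selectivity is what "reducing unnecessary API calls" refers to.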
internationalization (i18n) with dynamic language switching
React frontend uses i18next library to manage translations across 10+ languages, with JSON translation files organized by feature. Language selection is stored in Redux state and localStorage, allowing users to switch languages without page reload. The system supports pluralization, interpolation, and context-specific translations. Missing translations fall back to English with a warning in development mode.
Unique: Uses i18next with JSON translation files organized by feature, allowing community contributions of translations without code changes. Language preference is stored in Redux state and localStorage for persistence.
vs alternatives: More maintainable than hardcoded strings because translations are centralized in JSON files, while more flexible than static translations because language can be switched dynamically without page reload.
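The lookup-with-fallback behavior is the core of what i18next provides. A minimal sketch (Python stand-in; the catalogs and keys are invented for illustration) showing interpolation and the English fallback for a missing key:

```python
# Translation lookup with interpolation and English fallback.
CATALOGS = {
    "en": {"greeting": "Hello, {name}!"},
    "de": {"greeting": "Hallo, {name}!"},
    "fr": {},  # key missing here -> falls back to English
}

def t(key, lang="en", **params):
    template = CATALOGS.get(lang, {}).get(key) or CATALOGS["en"][key]
    return template.format(**params)

print(t("greeting", lang="de", name="Ada"))  # Hallo, Ada!
print(t("greeting", lang="fr", name="Ada"))  # Hello, Ada!
```

Because the catalogs are plain data keyed by language, switching language at runtime is just changing the `lang` argument, which is why no page reload is needed.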
configuration management with environment-based settings
Backend configuration system that reads settings from environment variables, YAML config files, and command-line arguments with a precedence order (CLI > env vars > config file > defaults). Configuration covers model paths, API settings, GPU memory limits, and feature flags. The system validates configuration at startup and provides helpful error messages for invalid settings. Configuration is exposed via REST API endpoint for frontend discovery.
Unique: Implements a four-level configuration hierarchy (CLI > env vars > config file > defaults) with validation at startup and exposure via REST API. Feature flags allow selective enabling/disabling of functionality without code changes.
vs alternatives: More flexible than hardcoded settings because configuration can be changed per environment, while simpler than external config servers (Consul, etcd) because it uses standard environment variables and YAML files.
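The precedence order amounts to layered dictionary merges, applied lowest priority first. A sketch under assumed names (the `INVOKEAI_` prefix and the specific settings are illustrative, not the project's actual keys):

```python
# Merge config sources with precedence: CLI > env vars > file > defaults.
import os

DEFAULTS = {"host": "127.0.0.1", "port": 9090, "max_vram_gb": 8}

def load_config(file_cfg, cli_args, env=os.environ, prefix="INVOKEAI_"):
    cfg = dict(DEFAULTS)                                    # 1. defaults
    cfg.update({k: v for k, v in file_cfg.items() if k in DEFAULTS})  # 2. file
    for key in DEFAULTS:                                    # 3. env vars
        var = prefix + key.upper()
        if var in env:
            cfg[key] = type(DEFAULTS[key])(env[var])  # coerce to default's type
    cfg.update({k: v for k, v in cli_args.items() if v is not None})  # 4. CLI
    return cfg

cfg = load_config({"port": 9000}, {"port": 9191}, env={"INVOKEAI_PORT": "9001"})
print(cfg["port"])  # 9191: CLI wins over env var and config file
```

Coercing env-var strings through the default value's type is one simple way to get the startup-time validation mentioned above: an unparsable value raises immediately with the offending key in the traceback.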
multi-model management with format conversion and caching
Centralized model registry that discovers, downloads, caches, and converts between diffusion model formats (safetensors, ckpt, diffusers). The system maintains a model index with metadata (architecture, size, quantization level) and implements LRU caching with configurable memory limits to keep frequently-used models in VRAM. Format conversion happens on-disk before loading, and the model loader uses PyTorch's state_dict utilities to handle architecture mismatches.
Unique: Implements a model registry with automatic format conversion and LRU caching that abstracts away the complexity of managing multiple model architectures and formats. The system tracks model metadata (size, architecture, quantization) to make intelligent caching decisions and supports both Hugging Face Hub downloads and local file paths.
vs alternatives: More user-friendly than manual model management because it handles format conversion and caching automatically, while more flexible than cloud-based solutions because models stay local and can be managed programmatically through the invocation system.
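The LRU policy above is size-aware: eviction is driven by a byte budget, not an entry count. A stdlib sketch with an invented `ModelCache` class (a real model manager would track VRAM and may offload to CPU rather than fully evict):

```python
# LRU cache bounded by total bytes rather than entry count.
from collections import OrderedDict

class ModelCache:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self._cache = OrderedDict()   # model_key -> (model, size_bytes)
        self.used = 0

    def get(self, key, loader, size):
        if key in self._cache:
            self._cache.move_to_end(key)          # mark as most recently used
            return self._cache[key][0]
        while self.used + size > self.max_bytes and self._cache:
            _, (_, evicted_size) = self._cache.popitem(last=False)  # evict LRU
            self.used -= evicted_size
        model = loader()                          # load only after making room
        self._cache[key] = (model, size)
        self.used += size
        return model
```

Tracking size per model is where the registry's metadata (architecture, quantization level) pays off: a quantized checkpoint reports a smaller footprint and displaces fewer cached neighbors.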
controlnet integration with multi-layer conditioning
Pluggable conditioning system that chains multiple ControlNet models (edge detection, pose, depth, semantic segmentation) to guide diffusion generation. Each ControlNet is loaded as a separate model and processes a control image through its encoder to produce per-block conditioning residuals; these residuals are scaled and added to the UNet's down-block and mid-block feature maps during denoising. The system supports weighted blending of multiple ControlNets and dynamic ControlNet switching within a workflow.
Unique: Implements ControlNet as a pluggable conditioning layer that can be dynamically composed in workflows, with support for weighted blending of multiple ControlNets and automatic accumulation of the per-block residuals injected into the UNet. The system abstracts ControlNet loading and inference behind a unified conditioning interface.
vs alternatives: More composable than Stable Diffusion WebUI's ControlNet implementation because it supports arbitrary combinations of ControlNets in node graphs, while keeping overhead lower than naive stacking by accumulating the per-block conditioning residuals in a single pass.
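Weighted blending reduces to a per-block weighted sum of each ControlNet's residuals. Sketched with plain Python lists standing in for tensors (in the real pipeline these are torch tensors added to the UNet's down- and mid-block features; the example residual values are made up):

```python
# Weighted per-block blending of residuals from multiple ControlNets.
def blend_controlnet_residuals(residuals_per_net, weights):
    """residuals_per_net: one list per ControlNet, each holding one
    residual (a list of floats here) per UNet block."""
    blended = []
    for block_residuals in zip(*residuals_per_net):   # iterate UNet blocks
        acc = [0.0] * len(block_residuals[0])
        for w, res in zip(weights, block_residuals):  # weighted accumulation
            acc = [a + w * r for a, r in zip(acc, res)]
        blended.append(acc)
    return blended

canny = [[1.0, 0.0], [0.5, 0.5]]   # residuals for two UNet blocks
depth = [[0.0, 1.0], [0.5, 0.5]]
print(blend_controlnet_residuals([canny, depth], [0.75, 0.25]))
# [[0.75, 0.25], [0.5, 0.5]]
```

Because the accumulation is a single pass over blocks, adding a third ControlNet only appends one more weighted term per block rather than a second full injection stage.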
real-time websocket event streaming for generation progress
FastAPI WebSocket server that emits structured events (generation-started, step-completed, generation-finished, error) during image generation, allowing the React frontend to update progress bars, preview intermediate steps, and handle cancellation. Events are serialized as JSON and include metadata (step number, a downscaled preview of the intermediate image, timing info). The backend maintains a queue of pending invocations and broadcasts events to all connected clients.
Unique: Uses FastAPI's native WebSocket support to emit structured events during generation, allowing the frontend to subscribe to specific invocation IDs and receive updates without polling. Events can carry intermediate preview images, enabling live display of generation progress.
vs alternatives: More responsive than polling-based progress tracking because events are pushed from the server, while simpler than message-queue-based systems like RabbitMQ because it's built into FastAPI without external dependencies.
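The event frames themselves are just JSON objects with a stable shape. A stdlib sketch of one such frame (the field names here are illustrative assumptions, not the project's actual event schema; the real server pushes these strings over a FastAPI WebSocket):

```python
# Structured progress event serialized as a JSON WebSocket frame.
import json
from dataclasses import dataclass, asdict

@dataclass
class GenerationEvent:
    event: str            # e.g. "step-completed"
    invocation_id: str    # lets clients filter to one graph run
    step: int
    total_steps: int
    elapsed_ms: float

def encode(ev: GenerationEvent) -> str:
    return json.dumps(asdict(ev))

frame = encode(GenerationEvent("step-completed", "inv-42", 12, 30, 843.5))
print(frame)
```

Including the `invocation_id` in every frame is what makes broadcast-to-all-clients workable: each client discards frames for runs it did not start, without any server-side subscription bookkeeping.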
+5 more capabilities