Draw Things
Free native Apple app for local AI image generation with Metal acceleration.
Capabilities (13 decomposed)
local text-to-image generation with metal-accelerated inference
Medium confidence: Executes Stable Diffusion and FLUX models directly on Apple Silicon devices using Metal GPU acceleration, downloading models to local storage and performing inference without cloud transmission. The architecture leverages Metal's compute shaders for parallel tensor operations, enabling real-time generation on M-series chips while maintaining complete data privacy for prompts and generated images in the free tier.
Implements Metal-native GPU inference pipeline specifically optimized for Apple Silicon's unified memory architecture, avoiding cloud transmission entirely in free tier and enabling fast on-device generation through Metal's compute shader parallelization — differentiating from cloud-first competitors like Midjourney or DALL-E
Faster than cloud-based generators for users with M-series hardware due to zero network latency and local GPU optimization, and more private than Midjourney/DALL-E since prompts and images never leave the device in free tier
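At the core of any Stable Diffusion-style pipeline like the one described above is an iterative denoising loop in which each step combines an unconditional and a text-conditioned noise prediction via classifier-free guidance. A minimal sketch in pure Python (function name and list-based "tensors" are illustrative, not Draw Things APIs; real implementations run this per-element math as Metal compute shaders):

```python
def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: push the noise prediction toward the
    text-conditioned direction, scaled by guidance_scale. At scale 1.0
    this returns the conditioned prediction unchanged; higher scales
    follow the prompt more aggressively."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(noise_uncond, noise_cond)]

# at guidance 7.5, a unit difference between predictions is amplified 7.5x
amplified = cfg_combine([0.0], [1.0], 7.5)
```

This per-step combination is why the "guidance scale" parameter exposed in diffusion UIs trades prompt adherence against output diversity.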
lora-based model fine-tuning and adaptation on-device
Medium confidence: Supports Low-Rank Adaptation (LoRA) training directly on Apple Silicon devices, allowing users to fine-tune base models (Stable Diffusion, FLUX) with custom datasets without cloud infrastructure. The implementation uses LoRA's parameter-efficient approach (adapting only low-rank matrices rather than full model weights) to reduce memory footprint and training time, with trained LoRAs stored locally and optionally uploaded to Draw Things+ cloud for inference.
Implements on-device LoRA training using Metal-optimized matrix operations, eliminating cloud training costs and data transmission — most competitors (Civitai, Hugging Face) require uploading datasets to cloud infrastructure or using separate training services
Cheaper and faster than cloud-based LoRA training services (no per-epoch billing) and more private since training data never leaves the device, though slower than GPU-cluster training due to single-device constraints
enterprise api integration for third-party application embedding
Medium confidence: Provides programmatic access to Draw Things' inference capabilities (local or cloud) for integration into third-party applications, enabling developers to embed image generation into their own tools. The implementation exposes an API (specification unspecified) with authentication and supports both local device inference and cloud compute, though exact endpoint structure, authentication mechanism, and SDK availability are undocumented.
Offers enterprise API for embedding Draw Things inference into third-party applications with optional on-premise deployment — most competitors (Midjourney, DALL-E) don't expose APIs for third-party integration; Stable Diffusion API is open but requires self-hosting
More flexible than cloud-only competitors because on-premise option enables data residency and offline operation; more integrated than self-hosted Stable Diffusion because Draw Things handles model management and optimization
batch image generation with parameter variation
Medium confidence: Generates multiple images in sequence with varying parameters (different prompts, seeds, guidance scales, or models) to explore design space efficiently. The implementation queues generation tasks and executes them sequentially on local hardware or cloud infrastructure, allowing users to specify parameter ranges or lists and receive multiple outputs.
unknown — insufficient data on whether batch generation is implemented, how it's exposed in UI, or how it differs from competitors' batch capabilities
If implemented, batch generation on local hardware would be faster than cloud-based batch services due to zero network latency per image; more cost-effective than cloud services for large batches
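The queueing pattern described above amounts to enumerating the Cartesian product of the parameter lists and dispatching one generation job per combination. A minimal sketch, assuming such a feature exists (job dictionary keys are illustrative, not a documented Draw Things format):

```python
import itertools

def build_batch(prompts, seeds, guidance_scales):
    """Enumerate one generation job per parameter combination,
    ready to run sequentially on local hardware."""
    return [
        {"prompt": p, "seed": s, "guidance": g}
        for p, s, g in itertools.product(prompts, seeds, guidance_scales)
    ]

# 1 prompt x 3 seeds x 2 guidance scales = 6 queued jobs
jobs = build_batch(["a red fox"], seeds=[1, 2, 3], guidance_scales=[5.0, 7.5])
```

Varying only the seed explores output diversity for a fixed prompt; varying guidance explores prompt adherence.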
prompt engineering and generation parameter tuning
Medium confidence: Provides UI controls and presets for fine-tuning generation parameters (guidance scale, sampling steps, seed, sampler algorithm, negative prompts) to control output quality, style, and consistency. The implementation exposes these parameters through sliders, text inputs, and preset templates, allowing users to iteratively refine generation without code.
unknown — insufficient data on which parameters are exposed, how they're presented in UI, or what presets/templates are available
If comprehensive parameter exposure is provided, more flexible than competitors' limited controls (Midjourney exposes only aspect ratio and quality); more accessible than command-line tools because UI-based
inpainting and selective region image editing
Medium confidence: Enables targeted image modification by accepting a base image, mask, and text prompt, then regenerating only the masked region using the diffusion model while preserving unmasked areas. The implementation uses latent-space inpainting (encoding the image to latent space, masking the latent representation, and diffusing only masked regions) to maintain coherence with surrounding content while applying new generation semantics from the prompt.
Implements latent-space inpainting directly on-device using Metal acceleration, avoiding cloud transmission of images and enabling real-time mask refinement — most cloud competitors (Photoshop Generative Fill, Runway) require uploading full images to servers
Faster iteration than cloud-based inpainting due to zero network latency and local GPU access, and more private since edited images never leave the device in free tier
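The masked-region blend at the heart of latent-space inpainting can be sketched as a per-element interpolation between the original latent and the freshly diffused one (pure Python over flat lists for illustration; real pipelines do this on 4D latent tensors every denoising step):

```python
def blend_latents(original, generated, mask):
    """Latent-space inpainting blend: keep the original latent where
    mask == 0.0, take freshly diffused values where mask == 1.0.
    Fractional mask values feather the seam between regions."""
    return [g * m + o * (1.0 - m)
            for o, g, m in zip(original, generated, mask)]

# masked element is replaced, unmasked element is preserved exactly
patched = blend_latents([1.0, 1.0], [9.0, 9.0], [1.0, 0.0])
```

Re-applying this blend at each denoising step is what keeps the unmasked areas pixel-stable while the masked region converges to new content.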
infinite canvas expansion with directional fill generation
Medium confidence: Extends image boundaries in any direction (up, down, left, right, or arbitrary angles) by generating new content that seamlessly blends with existing edges. The implementation uses outpainting (a variant of inpainting where the model generates content outside the original image bounds) combined with edge-aware context blending to maintain visual continuity and perspective consistency across the expanded canvas.
Implements directional outpainting with edge-aware context preservation on-device, allowing users to expand images in real-time without cloud submission — differentiating from Photoshop's Generative Expand which requires cloud processing
Faster and more private than cloud-based outpainting tools, with immediate local feedback for iterative composition refinement
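Outpainting reduces to inpainting once the canvas is enlarged and the newly added strip is marked for generation. A minimal sketch of that setup step (coordinates and mask convention are illustrative):

```python
def outpaint_mask(width, height, expand_right):
    """Build the enlarged-canvas mask for a rightward expansion:
    original pixels (x < width) stay fixed (mask 0.0), and only the
    new strip (x >= width) is flagged for generation (mask 1.0)."""
    new_width = width + expand_right
    mask = [[1.0 if x >= width else 0.0 for x in range(new_width)]
            for _ in range(height)]
    return new_width, mask

# expanding a 4 x 2 canvas by 2 columns: each row ends in two generate flags
w, mask = outpaint_mask(4, 2, expand_right=2)
```

The fixed region then serves as edge context for the diffusion model, which is what keeps the generated strip continuous with the original image.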
controlnet-based image generation with structural guidance
Medium confidence: Integrates ControlNet (a neural network adapter that conditions diffusion models on structural inputs like edge maps, depth maps, pose skeletons, or semantic segmentation) to guide image generation toward specific compositions, layouts, or structural constraints. The implementation loads ControlNet weights alongside base models and uses multi-scale feature injection to influence generation while maintaining semantic fidelity to text prompts.
Implements ControlNet inference on-device with Metal optimization, enabling real-time structural guidance without cloud submission — most competitors (Midjourney, DALL-E) don't expose ControlNet or require cloud processing
More flexible than competitors' built-in composition tools (Midjourney's aspect ratio, DALL-E's region selection) because ControlNet supports pose, depth, and edge guidance; faster than cloud-based ControlNet services due to local GPU execution
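The "multi-scale feature injection" mentioned above means the ControlNet branch's feature maps are added as residuals onto the matching scales of the base U-Net. A toy sketch over nested lists (real implementations add tensors at each resolution of the U-Net decoder; the `strength` knob is an assumed parameter name):

```python
def inject_control(unet_features, control_features, strength=1.0):
    """Add ControlNet residuals onto the base model's features,
    scale by scale. strength=0.0 disables guidance entirely;
    higher values enforce the structural input more strictly."""
    return [[f + strength * c for f, c in zip(scale_f, scale_c)]
            for scale_f, scale_c in zip(unet_features, control_features)]
```

Because the injection is purely additive, the base model's weights stay untouched, which is why one ControlNet can be swapped across compatible base checkpoints.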
image-to-video generation with temporal coherence
Medium confidence: Converts static images into short video clips by generating frame sequences that maintain visual consistency and smooth motion. The implementation approach is unspecified in documentation (could be frame interpolation, latent-space video diffusion, or optical flow-based synthesis), but the capability enables animation of still images with semantic understanding of motion from text prompts.
Implements image-to-video generation locally on Apple Silicon, avoiding cloud submission of images — most competitors (Runway, Pika) require cloud processing; implementation approach (interpolation vs diffusion) unknown but likely optimized for Metal
More private than cloud-based video generators since images never leave device; faster iteration than cloud services due to local GPU access, though likely slower per-video due to single-device compute constraints
style transfer and artistic transformation
Medium confidence: Applies artistic styles to images (e.g., photo-to-animation, realistic-to-cartoon, style-specific rendering) by using diffusion models conditioned on style descriptors or reference images. The implementation leverages text prompts with style keywords and optional reference images to guide the model toward specific artistic outputs while preserving content structure.
Implements style transfer on-device using diffusion models with Metal acceleration, avoiding cloud submission of images — most competitors (Photoshop Neural Filters, online style transfer tools) require cloud processing
Faster and more private than cloud-based style transfer tools; more flexible than traditional neural style transfer (Gatys et al.) because it uses semantic understanding from diffusion models rather than texture matching
try it on — virtual apparel and character concept prototyping
Medium confidence: Generates images of clothing, accessories, or character designs applied to reference images (e.g., 'see how this shirt looks on a model' or 'visualize this character design on different body types'). The implementation uses conditional image generation with spatial awareness to place and adapt designs onto reference images while maintaining realistic lighting, proportions, and integration.
Implements spatial-aware conditional generation for apparel and character design on-device, enabling real-time prototyping without cloud submission — most competitors (Shopify's design tools, fashion platforms) require cloud processing or manual mockup creation
Faster iteration than manual design mockups and more private than cloud-based virtual try-on tools; enables rapid exploration of design variations without physical prototyping
hybrid local-to-cloud inference with tier-based compute offload
Medium confidence: Provides optional cloud inference via Draw Things' managed servers for users on Community and Draw Things+ tiers, allowing users to offload generation to cloud infrastructure when local device is unavailable or when higher-speed inference is desired. The architecture maintains local-first default (free tier) while enabling seamless cloud fallback through account-based tier detection and API routing.
Implements transparent local-to-cloud routing based on account tier, allowing users to seamlessly switch between local and cloud inference without changing UI or workflow — most competitors (Midjourney, DALL-E) are cloud-only; Stable Diffusion WebUI requires manual server switching
More flexible than cloud-only competitors because local inference is always available as fallback; more convenient than manual server switching because tier-based routing is automatic
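The tier-based routing described above can be sketched as a small decision function (tier names follow the listing; the exact routing logic is an assumption, since the actual fallback policy is undocumented):

```python
def route_inference(tier, local_available):
    """Local-first routing sketch: the free tier never leaves the
    device, while paid tiers (Community, Draw Things+) fall back to
    cloud compute when local inference is unavailable."""
    if tier == "free":
        return "local" if local_available else "unavailable"
    return "local" if local_available else "cloud"
```

The key property is that local inference remains the default on every tier; cloud is only ever a fallback, never a requirement.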
model download and local caching with version management
Medium confidence: Manages model lifecycle (discovery, download, storage, versioning) through Draw Things' proprietary model server, allowing users to browse available models (Stable Diffusion, FLUX, and community models), download them to local storage, and switch between versions. The implementation caches downloaded models locally to avoid re-downloading and provides UI for model selection and management.
Implements proprietary model repository with local caching and version switching, providing centralized discovery and management — most competitors (Automatic1111, ComfyUI) require manual Hugging Face downloads and folder management
More user-friendly than manual model management because downloads and switching are integrated into UI; more curated than open repositories because Draw Things vets models for quality and compatibility
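A local model cache of the kind described above boils down to keying stored files by (name, version) and skipping the download on a cache hit. A minimal sketch (file naming and hashing scheme are invented for illustration, not Draw Things' actual storage layout):

```python
import hashlib
import os
import tempfile

class ModelCache:
    """Minimal local model cache keyed by (name, version); the
    downloader callback only runs on a cache miss."""

    def __init__(self, root):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def path_for(self, name, version):
        # short digest keeps filenames unique even for odd model names
        key = hashlib.sha256(f"{name}@{version}".encode()).hexdigest()[:16]
        return os.path.join(self.root, f"{name}-{version}-{key}.bin")

    def ensure(self, name, version, fetch):
        """Return the local path, downloading via `fetch` only once."""
        path = self.path_for(name, version)
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.write(fetch(name, version))
        return path

# usage: the downloader runs once; the second request hits the cache
calls = []
def fake_download(name, version):
    calls.append((name, version))
    return b"model weights"

cache = ModelCache(tempfile.mkdtemp())
first = cache.ensure("sd-v1.5", "1.0", fake_download)
second = cache.ensure("sd-v1.5", "1.0", fake_download)
```

Versioning the cache key is what allows side-by-side installs of multiple model versions and instant switching between them.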
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with Draw Things, ranked by overlap. Discovered automatically through the match graph.
Stable Diffusion Public Release
Announcement of the public release of Stable Diffusion, an AI-based image generation model trained on a broad internet scrape and licensed under a Creative ML OpenRAIL-M license. Stable Diffusion blog, 22 August, 2022.
MediaPipe
Google's cross-platform on-device ML framework with pre-built solutions.
FLUX.1-RealismLora
FLUX.1-RealismLora — AI demo on HuggingFace
Replicate
Run ML models via API — thousands of models, pay-per-second, custom model deployment via Cog.
Civitai
Harness AI to create, share, and innovate in multimedia content...
Lensa
An all-in-one image editing app that includes the generation of personalized avatars using Stable...
Best For
- ✓Privacy-conscious creative professionals and hobbyists
- ✓Apple ecosystem users with M-series hardware (M1, M2, M3, M4 Macs)
- ✓Teams requiring offline image generation for sensitive workflows
- ✓Individual creators wanting zero cloud dependency for core generation
- ✓Individual artists and designers wanting personalized model variants
- ✓Small teams building brand-specific image generation pipelines
- ✓Creators who want training data to remain on-device (not uploaded to cloud)
- ✓Users with M3/M4 Macs (higher VRAM for training workloads)
Known Limitations
- ⚠Apple Silicon only — no Windows, Linux, or Intel Mac support
- ⚠Inference speed varies significantly by device (M1 vs M4 performance gap can be 2-3x)
- ⚠Model download required before first use (Stable Diffusion ~4GB, FLUX ~24GB) — requires local storage and bandwidth
- ⚠Exact inference latency unspecified in documentation; 'minutes' per generation suggests slower than cloud alternatives
- ⚠No batch generation capability documented; single-image-at-a-time workflow
- ⚠LoRA training available only in free tier (local-only); no cloud training option documented
UnfragileRank
UnfragileRank is computed from adoption signals, documentation quality, ecosystem connectivity, match graph feedback, and freshness. No artifact can pay for a higher rank.
About
Native macOS and iOS application for running Stable Diffusion, FLUX, and other image generation models locally on Apple Silicon with Metal acceleration, offering LoRA support, ControlNet, inpainting, and optimized performance without cloud dependencies.