reor
Repository · Free
Private & local AI personal knowledge management app for high entropy people.
Capabilities (13 decomposed)
local-first semantic search across markdown notes with hybrid keyword-vector matching
Medium confidence: Reor implements semantic search by embedding note content using Transformers.js (client-side ONNX models) and storing vectors in LanceDB, a local vector database with native bindings. The system supports both pure vector similarity search and hybrid mode combining semantic matching with keyword indexing, enabling full-text discovery without cloud API calls. Search operates entirely on-device with no data transmission, using LanceDB's columnar storage for fast approximate nearest neighbor queries across note collections.
Uses Transformers.js for client-side embedding generation instead of API calls, combined with LanceDB's native bindings for platform-optimized vector storage, enabling zero-network-latency semantic search with full data privacy. Hybrid mode implementation merges vector similarity with keyword matching at query time rather than pre-computing combined scores.
Faster than Pinecone/Weaviate for local use cases (no network round-trip) and more privacy-preserving than cloud vector DBs; slower than specialized FAISS implementations but with better multi-platform support and easier integration with Electron apps.
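A minimal sketch of this pipeline, assuming the `@xenova/transformers` and `vectordb` (LanceDB) packages; the `notes` table name, index path, and schema are illustrative, not taken from Reor's source:

```typescript
import { pipeline } from '@xenova/transformers';
import { connect } from 'vectordb';

async function searchNotes(query: string, k = 5) {
  // Embed the query on-device with a small ONNX model (cached after first load).
  const embed = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  const output = await embed(query, { pooling: 'mean', normalize: true });
  const vector: number[] = Array.from(output.data);

  // Query the local LanceDB table; no network round-trip is involved.
  const db = await connect('./vault-index');
  const table = await db.openTable('notes');
  return table.search(vector).limit(k).execute();
}
```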
automatic bidirectional note linking via vector similarity clustering
Medium confidence: Reor automatically discovers and surfaces related notes by computing vector similarity between note embeddings and clustering semantically similar content. The system runs in the background, generating embeddings for all notes and maintaining a similarity graph that populates a sidebar panel showing related notes while editing. This creates a knowledge graph without requiring manual wiki-style link syntax, using the same embedding infrastructure as semantic search to identify conceptual relationships.
Implements automatic linking through continuous vector similarity computation rather than explicit backlink syntax or manual curation, creating emergent knowledge graphs that evolve as note content changes. Bidirectional linking is computed on-demand when notes are opened, avoiding expensive pre-computation of full similarity matrices.
More discoverable than Obsidian's manual backlink system and more privacy-preserving than cloud-based note-linking services; less precise than human-curated links but requires zero manual effort to maintain.
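One way the on-demand lookup could look, reusing the same LanceDB index; the `_distance` filter assumes the table was built with a cosine metric, and the 0.7 cutoff is an illustrative threshold:

```typescript
import { connect } from 'vectordb';

// Sketch: when a note is opened, query the index with that note's stored
// embedding and keep close neighbors. With a cosine metric, similarity is
// 1 - _distance; 0.7 is an arbitrary cutoff for "related enough".
async function relatedNotes(noteVector: number[], notePath: string) {
  const db = await connect('./vault-index');
  const table = await db.openTable('notes');
  const hits = await table.search(noteVector).limit(11).execute();
  return hits
    .filter((h: any) => h.path !== notePath && 1 - h._distance >= 0.7)
    .slice(0, 10); // drop the note itself, cap the sidebar list
}
```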
conversation history management with context preservation
Medium confidence: Reor maintains conversation history in the chat interface, storing user messages and LLM responses with timestamps. The system preserves conversation context by including previous messages when generating new responses, enabling multi-turn dialogue. Conversation history is stored in-memory during the session; users can optionally save conversations to disk for later reference. The system manages context window constraints by truncating older messages if the full history exceeds the LLM's context limit.
Manages conversation history with context window awareness, automatically truncating older messages to fit within LLM limits. Conversations can be saved to disk as JSON or markdown for persistence and sharing.
Simpler than ChatGPT's conversation management; no built-in search or organization but sufficient for single-session use cases.
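A sketch of the truncation logic described above; the 4-characters-per-token estimate is a rough heuristic, not Reor's actual tokenizer:

```typescript
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }

// Keep the newest messages that fit a token budget, walking history backwards.
function truncateHistory(history: ChatMessage[], maxTokens: number): ChatMessage[] {
  const kept: ChatMessage[] = [];
  let budget = maxTokens;
  for (const msg of [...history].reverse()) {
    const cost = Math.ceil(msg.content.length / 4); // crude token estimate
    if (cost > budget) break;
    kept.unshift(msg);
    budget -= cost;
  }
  return kept;
}
```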
multi-platform desktop application packaging via electron with auto-updates
Medium confidence: Reor is built as an Electron application that runs on macOS (x64/ARM), Windows (x64), and Linux (x64), providing a native desktop experience across platforms. The build system packages the application for each platform with platform-specific optimizations (e.g., ARM support for Apple Silicon). Auto-update functionality checks for new releases and prompts users to upgrade, with differential updates to minimize download size.
Packages Reor as a native Electron app with platform-specific optimizations (ARM support for Apple Silicon) and auto-update functionality. LanceDB native bindings are compiled for each platform, enabling optimized vector database performance.
More performant than web-based alternatives; larger download size and memory footprint than native apps but simpler to develop and maintain than separate native implementations.
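The standard way to wire this in an Electron app is `electron-updater`; whether Reor uses exactly this setup is an assumption:

```typescript
import { app } from 'electron';
import { autoUpdater } from 'electron-updater';

app.whenReady().then(() => {
  // Checks the publish feed, downloads in the background, and shows a
  // native notification when an update is ready.
  autoUpdater.checkForUpdatesAndNotify();

  autoUpdater.on('update-downloaded', () => {
    // Apply immediately, or defer to the next launch if preferred.
    autoUpdater.quitAndInstall();
  });
});
```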
optional cloud llm provider integration (openai, anthropic) with fallback support
Medium confidence: While Reor is designed for local-first operation, it supports optional integration with cloud LLM providers (OpenAI, Anthropic) for users who prefer higher-quality models or need specific capabilities. Users can configure API keys in settings and switch between local and cloud models at runtime. The system maintains a unified chat interface regardless of LLM provider, with fallback logic to use local models if cloud API calls fail.
Provides optional cloud LLM integration while maintaining local-first as default, with unified chat interface and fallback logic. Users can switch providers at runtime without changing application code.
More flexible than local-only systems; enables access to higher-quality models while preserving privacy-first design. Simpler than building separate cloud and local implementations.
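A sketch of the fallback shape; the provider functions are hypothetical stand-ins for whatever client wrappers the app actually uses:

```typescript
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string; }
type ChatProvider = (messages: ChatMessage[]) => Promise<string>;

// Try the configured cloud provider first; on any failure, fall back to the
// local model so the chat keeps working offline.
async function chatWithFallback(
  cloud: ChatProvider,
  local: ChatProvider,
  messages: ChatMessage[],
): Promise<string> {
  try {
    return await cloud(messages);
  } catch (err) {
    console.warn('Cloud LLM call failed; falling back to local model', err);
    return local(messages);
  }
}
```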
rag-powered q&a chat with local llm and note context retrieval
Medium confidence: Reor implements a Retrieval-Augmented Generation (RAG) chat system where user questions trigger semantic search across notes to retrieve relevant chunks, which are then passed as context to a local LLM (via Ollama or Transformers.js) for answer generation. The system manages a conversation history, formats retrieved note chunks as context, and streams LLM responses back to the UI. All processing occurs locally; no conversation data or note content is sent to external APIs unless explicitly configured to use cloud models (OpenAI/Anthropic).
Implements RAG by combining local semantic search (Transformers.js + LanceDB) with local LLM execution (Ollama), creating a fully offline Q&A system with no external API dependencies. Context retrieval is integrated into the chat flow via IPC communication between Electron main process (LLM execution) and renderer (UI), with streaming responses for real-time feedback.
More private than ChatGPT plugins or cloud-based RAG services; slower response times than API-based alternatives but eliminates data transmission and API costs.
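A condensed sketch of the retrieve-then-generate loop against Ollama's HTTP API; `retrieveChunks` is a hypothetical helper standing in for the LanceDB search sketched earlier, and the model name is illustrative:

```typescript
// Hypothetical retrieval helper: semantic search over chunked notes.
declare function retrieveChunks(
  query: string, k: number,
): Promise<{ notePath: string; text: string }[]>;

async function answerQuestion(question: string): Promise<string> {
  const chunks = await retrieveChunks(question, 5);
  const context = chunks.map(c => `[${c.notePath}]\n${c.text}`).join('\n\n');

  // Ollama's local chat endpoint; nothing leaves the machine.
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    body: JSON.stringify({
      model: 'llama3',
      stream: false,
      messages: [
        { role: 'system', content: `Answer using only this context:\n\n${context}` },
        { role: 'user', content: question },
      ],
    }),
  });
  return (await res.json()).message.content;
}
```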
markdown note editing with syntax highlighting and backlink visualization
Medium confidence: Reor provides an Obsidian-like markdown editor built into the Electron renderer process, supporting syntax highlighting, real-time preview, and backlink/wikilink syntax (`[[note-name]]`). The editor integrates with the note filesystem layer to enable creating, editing, and linking notes within the PKM system. Backlinks are rendered as clickable references that navigate to linked notes, and the editor supports standard markdown formatting with code block syntax highlighting.
Integrates markdown editing directly into Electron app with real-time backlink visualization and wikilink navigation, avoiding the need for external editors. Backlinks are computed from the vector similarity graph, so related notes surface automatically even without explicit `[[links]]`.
More integrated than using VS Code or external editors; less feature-rich than Obsidian but tightly coupled with local AI capabilities for automatic linking and RAG.
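For illustration, wikilink extraction reduces to a small parser over the markdown source; this sketch handles both `[[target]]` and `[[target|alias]]` forms:

```typescript
// Collect wikilink targets from a note body, e.g. for backlink indexing.
function extractWikilinks(markdown: string): string[] {
  const targets: string[] = [];
  const re = /\[\[([^\]|]+)(?:\|[^\]]+)?\]\]/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(markdown)) !== null) {
    targets.push(match[1].trim());
  }
  return targets;
}
```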
local llm execution via ollama integration with model switching
Medium confidence: Reor integrates with Ollama, a local LLM runtime, to execute language models entirely on the user's machine. The system allows users to configure which Ollama model to use for chat and text generation, with support for switching models without restarting the app. The main process communicates with Ollama via HTTP API calls, streaming responses back to the renderer for real-time display. Users can also configure cloud-based LLM providers (OpenAI, Anthropic) as fallbacks or alternatives.
Abstracts LLM execution behind a unified interface that supports both local Ollama models and cloud APIs (OpenAI/Anthropic), allowing users to switch providers without changing application code. Model configuration is persisted in settings and can be changed at runtime without app restart.
More flexible than hardcoding a single LLM provider; slower than cloud APIs but eliminates API costs and data transmission. Ollama integration is simpler than managing LLM weights directly but requires external process management.
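Ollama streams newline-delimited JSON from its generate endpoint; a sketch of how a main process might parse that stream before relaying tokens over IPC:

```typescript
// Stream tokens from Ollama and hand each fragment to a callback.
async function streamOllama(
  model: string,
  prompt: string,
  onToken: (token: string) => void,
): Promise<void> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    body: JSON.stringify({ model, prompt, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let nl: number;
    while ((nl = buffer.indexOf('\n')) >= 0) {
      const line = buffer.slice(0, nl).trim();
      buffer = buffer.slice(nl + 1);
      if (line) onToken(JSON.parse(line).response ?? ''); // each line is one JSON chunk
    }
  }
}
```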
client-side embedding generation via transformers.js onnx models
Medium confidence: Reor uses Transformers.js to run embedding models (e.g., all-MiniLM-L6-v2) directly in the browser/Electron renderer process using ONNX format. This eliminates the need for embedding API calls or external services; embeddings are generated on-device as notes are created or updated. The system manages model loading, caching, and batching of embedding requests to optimize performance. Embeddings are then stored in LanceDB for semantic search and similarity computation.
Runs embedding models in the Electron renderer process using Transformers.js ONNX models, avoiding any external API calls or main process overhead. Models are cached in memory and reused across embedding requests, with batching support for efficient bulk embedding of note collections.
More private than OpenAI Embeddings API; slower than GPU-accelerated embedding services but eliminates API costs and data transmission. Simpler to deploy than self-hosted embedding services like Ollama.
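A sketch of the caching-and-batching pattern described, assuming `@xenova/transformers`; the model name mirrors the all-MiniLM-L6-v2 example above:

```typescript
import { pipeline } from '@xenova/transformers';

// Lazy singleton: the first call downloads/loads the ONNX model, later calls reuse it.
let embedder: Promise<any> | null = null;
function getEmbedder() {
  embedder ??= pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
  return embedder;
}

// Embed many notes in one pass; the pipeline accepts an array of inputs.
async function embedBatch(texts: string[]): Promise<number[][]> {
  const model = await getEmbedder();
  const output = await model(texts, { pooling: 'mean', normalize: true });
  return output.tolist(); // one vector per input text
}
```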
file system-based note persistence with directory structure support
Medium confidence: Reor stores notes as markdown files in the local filesystem, organized in a directory structure that mirrors the user's knowledge base organization. The main process handles all filesystem operations (create, read, update, delete) via IPC communication with the renderer, ensuring thread-safe access and preventing concurrent modification conflicts. Notes are stored in a configurable vault directory, and the system maintains metadata (creation date, modification date) alongside note content.
Uses standard filesystem storage with markdown format, enabling portability and integration with external tools (git, syncthing, etc.). IPC-based filesystem access ensures main process handles all I/O, preventing race conditions in the renderer.
More portable than proprietary database formats; enables version control and backup via standard tools. Slower than in-memory or database-backed storage but provides durability and offline access.
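A sketch of main-process note handlers; the channel names and vault layout are illustrative:

```typescript
import { ipcMain } from 'electron';
import { promises as fs } from 'fs';
import * as path from 'path';

// All note I/O lives in the main process; the renderer only sees IPC channels.
export function registerNoteHandlers(vaultDir: string) {
  ipcMain.handle('notes:read', (_evt, relPath: string) =>
    fs.readFile(path.join(vaultDir, relPath), 'utf-8'));

  ipcMain.handle('notes:write', async (_evt, relPath: string, content: string) => {
    const full = path.join(vaultDir, relPath);
    await fs.mkdir(path.dirname(full), { recursive: true });
    await fs.writeFile(full, content, 'utf-8');
  });
}
```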
settings persistence and configuration management with theme support
Medium confidence: Reor implements a settings system that persists user configuration (LLM model choice, embedding model, theme, vault path) to disk via the main process. Settings are loaded on app startup and can be modified via the UI settings panel. The system supports theme switching (light/dark mode) with real-time UI updates via React Context, and LLM/embedding model configuration with validation. Settings are stored in a JSON file in the app's data directory.
Integrates settings management with theme switching via React Context, enabling real-time UI updates without app restart. LLM and embedding model configuration is validated at save time to prevent invalid model selections.
Simpler than external configuration management tools; less secure than encrypted config stores but sufficient for local-only use cases.
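A sketch of the JSON-file approach; the field names are illustrative rather than Reor's actual schema:

```typescript
import { app } from 'electron';
import { promises as fs } from 'fs';
import * as path from 'path';

interface Settings {
  llmModel: string;
  embeddingModel: string;
  theme: 'light' | 'dark';
  vaultPath: string;
}

const DEFAULTS: Settings = {
  llmModel: 'llama3',
  embeddingModel: 'Xenova/all-MiniLM-L6-v2',
  theme: 'dark',
  vaultPath: '',
};

const settingsFile = () => path.join(app.getPath('userData'), 'settings.json');

export async function loadSettings(): Promise<Settings> {
  try {
    // Merge over defaults so new fields get sane values after upgrades.
    return { ...DEFAULTS, ...JSON.parse(await fs.readFile(settingsFile(), 'utf-8')) };
  } catch {
    return { ...DEFAULTS }; // first run or corrupt file
  }
}

export const saveSettings = (s: Settings) =>
  fs.writeFile(settingsFile(), JSON.stringify(s, null, 2));
```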
electron ipc-based inter-process communication for main/renderer separation
Medium confidence: Reor uses Electron's IPC (Inter-Process Communication) system to enable secure communication between the main process (Node.js with filesystem/LLM access) and renderer process (React UI). The main process exposes handlers for filesystem operations, LLM inference, and embedding generation; the renderer invokes these via IPC calls and receives results asynchronously. This architecture provides security isolation (renderer cannot directly access filesystem) while enabling the UI to trigger backend operations.
Implements a preload script-based IPC API that exposes main process capabilities to renderer in a type-safe manner, with explicit handler registration for filesystem, LLM, and embedding operations. Streaming responses are supported via IPC channels for real-time feedback (e.g., LLM token streaming).
More secure than exposing Node.js APIs directly to renderer; adds latency compared to in-process execution but provides essential process isolation for Electron security model.
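A sketch of the preload bridge; the channel names match the hypothetical handlers sketched above:

```typescript
import { contextBridge, ipcRenderer } from 'electron';

// Expose a narrow, explicit API instead of raw Node access in the renderer.
contextBridge.exposeInMainWorld('reorApi', {
  readNote: (relPath: string) => ipcRenderer.invoke('notes:read', relPath),
  writeNote: (relPath: string, content: string) =>
    ipcRenderer.invoke('notes:write', relPath, content),
  // Streaming: the main process pushes tokens on a channel; the UI subscribes.
  onLlmToken: (cb: (token: string) => void) =>
    ipcRenderer.on('llm:token', (_evt, token: string) => cb(token)),
});
```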
note chunking and context window management for rag
Medium confidence: Reor automatically chunks long notes into smaller segments (by paragraph, sentence, or token count) to fit within LLM context windows and improve retrieval precision. When answering questions via RAG, the system retrieves relevant chunks rather than entire notes, allowing more context to be included without exceeding token limits. Chunks are embedded separately and stored in the vector database with source attribution (note name, chunk index).
Implements automatic note chunking with source attribution, enabling RAG to retrieve precise note segments rather than entire notes. Chunks are embedded and indexed separately, improving retrieval precision for long-form content.
More precise than retrieving entire notes; requires careful chunking strategy to avoid splitting semantic units. Simpler than hierarchical chunking but less flexible.
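A sketch of paragraph-based chunking with attribution; the 1000-character target stands in for a token-based limit:

```typescript
interface Chunk { notePath: string; index: number; text: string; }

// Pack whole paragraphs into chunks up to a size limit, tagging each chunk
// with its source note and position for RAG citation.
function chunkNote(notePath: string, content: string, maxChars = 1000): Chunk[] {
  const chunks: Chunk[] = [];
  let current = '';
  for (const para of content.split(/\n\s*\n/)) {
    if (current && current.length + para.length > maxChars) {
      chunks.push({ notePath, index: chunks.length, text: current.trim() });
      current = '';
    }
    current += para + '\n\n';
  }
  if (current.trim()) {
    chunks.push({ notePath, index: chunks.length, text: current.trim() });
  }
  return chunks;
}
```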
Capabilities are decomposed by AI analysis. Each maps to specific user intents and improves with match feedback.
Related Artifacts (sharing capabilities)
Artifacts that share capabilities with reor, ranked by overlap. Discovered automatically through the match graph.
MyMemo AI
Transform digital chaos into an organized, AI-enhanced knowledge...
obsidian-copilot
THE Copilot in Obsidian
Apple Notes
Talk with your Apple Notes
Obsidian Copilot
AI agent for Obsidian knowledge vault.
Saga
Digital AI assistant for notes, tasks, and tools
Limitless
An AI memory assistant for recording conversations and meetings, generating summaries, and searching past interactions across apps and an optional wearable.
Best For
- ✓Privacy-conscious researchers and knowledge workers managing 100+ notes
- ✓Teams building local-first PKM tools who need offline semantic search
- ✓Developers integrating RAG systems without cloud vector database dependencies
- ✓Individual researchers building personal knowledge bases organically
- ✓Teams using Reor as a second-brain tool who want emergent structure
- ✓Users migrating from manual wiki-linking systems to automated discovery
- ✓Users conducting extended research sessions with multiple questions
- ✓Teams using Reor for collaborative knowledge exploration
Known Limitations
- ⚠Embedding generation is CPU-bound; initial indexing of large note collections (10k+ notes) may take minutes
- ⚠LanceDB native bindings must be compiled per platform (macOS x64/ARM, Windows x64, Linux x64); platforms outside this set are unsupported
- ⚠Hybrid search requires maintaining separate keyword indices alongside vector indices, increasing storage overhead by ~20-30%
- ⚠Vector dimension size fixed by embedding model choice; switching models requires full re-indexing
- ⚠Similarity threshold for 'related notes' is fixed or requires manual tuning; no adaptive clustering based on user feedback
- ⚠Bidirectional linking computation scales quadratically with note count; 10k+ notes may cause noticeable UI lag when opening notes
Repository Details
Last commit: May 13, 2025