reor vs vectra
Side-by-side comparison to help you choose.
| Feature | reor | vectra |
|---|---|---|
| Type | Repository | Repository |
| UnfragileRank | 48/100 | 41/100 |
| Adoption | 1 | 0 |
| Quality | 0 | 0 |
| Ecosystem | 1 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 13 decomposed | 12 decomposed |
| Times Matched | 0 | 0 |
Reor implements semantic search by embedding note content using Transformers.js (client-side ONNX models) and storing vectors in LanceDB, a local vector database with native bindings. The system supports both pure vector similarity search and hybrid mode combining semantic matching with keyword indexing, enabling full-text discovery without cloud API calls. Search operates entirely on-device with no data transmission, using LanceDB's columnar storage for fast approximate nearest neighbor queries across note collections.
Unique: Uses Transformers.js for client-side embedding generation instead of API calls, combined with LanceDB's native bindings for platform-optimized vector storage, enabling zero-network-latency semantic search with full data privacy. Hybrid mode implementation merges vector similarity with keyword matching at query time rather than pre-computing combined scores.
vs alternatives: Faster than Pinecone/Weaviate for local use cases (no network round-trip) and more privacy-preserving than cloud vector DBs; slower than specialized FAISS implementations but with better multi-platform support and easier integration with Electron apps.
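A minimal sketch of this on-device pipeline, assuming the `@xenova/transformers` package and an in-memory list of pre-embedded notes; the model name, the `alpha` weighting, and the keyword-overlap scoring are illustrative rather than reor's actual implementation:

```ts
import { pipeline } from '@xenova/transformers';

interface Note { id: string; text: string; vector?: number[] }

// Load a small ONNX embedding model once; inference runs fully on-device.
const embedder = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

async function embed(text: string): Promise<number[]> {
  // Mean-pool token embeddings and L2-normalize so a dot product equals cosine similarity.
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}

function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0);
}

// Hybrid mode: merge vector similarity with a simple keyword-overlap score at query time.
async function hybridSearch(query: string, notes: Note[], alpha = 0.7, topK = 5) {
  const qVec = await embed(query);
  const qTerms = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  return notes
    .map((note) => {
      const semantic = dot(qVec, note.vector!); // notes are assumed to be pre-embedded
      const terms = note.text.toLowerCase().split(/\W+/).filter(Boolean);
      const keyword = terms.filter((t) => qTerms.has(t)).length / Math.max(terms.length, 1);
      return { note, score: alpha * semantic + (1 - alpha) * keyword };
    })
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```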
Reor automatically discovers and surfaces related notes by computing vector similarity between note embeddings and clustering semantically similar content. The system runs in the background, generating embeddings for all notes and maintaining a similarity graph that populates a sidebar panel showing related notes while editing. This creates a knowledge graph without requiring manual wiki-style link syntax, using the same embedding infrastructure as semantic search to identify conceptual relationships.
Unique: Implements automatic linking through continuous vector similarity computation rather than explicit backlink syntax or manual curation, creating emergent knowledge graphs that evolve as note content changes. Bidirectional linking is computed on-demand when notes are opened, avoiding expensive pre-computation of full similarity matrices.
vs alternatives: More discoverable than Obsidian's manual backlink system and more privacy-preserving than cloud-based note-linking services; less precise than human-curated links but requires zero manual effort to maintain.
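A hedged sketch of that on-demand ranking, reusing the `Note` type and `dot` helper from the previous sketch; the similarity threshold is a hypothetical default:

```ts
// Rank every other note by cosine similarity to the note currently open in the
// editor (vectors are pre-normalized, so a dot product is the cosine score).
function relatedNotes(current: Note, all: Note[], threshold = 0.6, topK = 10) {
  return all
    .filter((note) => note.id !== current.id)
    .map((note) => ({ note, score: dot(current.vector!, note.vector!) }))
    .filter((entry) => entry.score >= threshold) // drop weakly related notes
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```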
Reor maintains conversation history in the chat interface, storing user messages and LLM responses with timestamps. The system preserves conversation context by including previous messages when generating new responses, enabling multi-turn dialogue. Conversation history is stored in-memory during the session; users can optionally save conversations to disk for later reference. The system manages context window constraints by truncating older messages if the full history exceeds the LLM's context limit.
Unique: Manages conversation history with context window awareness, automatically truncating older messages to fit within LLM limits. Conversations can be saved to disk as JSON or markdown for persistence and sharing.
vs alternatives: Simpler than ChatGPT's conversation management; no built-in search or organization but sufficient for single-session use cases.
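A sketch of context-window-aware truncation under a rough characters-per-token estimate; the real cutoff depends on the configured model's tokenizer and context limit:

```ts
interface ChatMessage { role: 'system' | 'user' | 'assistant'; content: string }

// Rough token estimate (~4 characters per token); only an approximation.
const estimateTokens = (text: string) => Math.ceil(text.length / 4);

// Keep the most recent messages that fit within the model's context budget,
// always preserving the system prompt at the front of the history.
function truncateHistory(history: ChatMessage[], maxTokens: number): ChatMessage[] {
  const [system, ...rest] = history;
  const kept: ChatMessage[] = [];
  let used = estimateTokens(system.content);
  for (const msg of rest.reverse()) { // walk backwards from the newest message
    const cost = estimateTokens(msg.content);
    if (used + cost > maxTokens) break;
    kept.unshift(msg);
    used += cost;
  }
  return [system, ...kept];
}
```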
Reor is built as an Electron application that runs on macOS (x64/ARM), Windows (x64), and Linux (x64), providing a native desktop experience across platforms. The build system packages the application for each platform with platform-specific optimizations (e.g., ARM support for Apple Silicon). Auto-update functionality checks for new releases and prompts users to upgrade, with differential updates to minimize download size.
Unique: Packages Reor as a native Electron app with platform-specific optimizations (ARM support for Apple Silicon) and auto-update functionality. LanceDB native bindings are compiled for each platform, enabling optimized vector database performance.
vs alternatives: More performant than web-based alternatives; larger download size and memory footprint than native apps but simpler to develop and maintain than separate native implementations.
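A hedged sketch of how such an update check might look with `electron-updater` in the main process; reor's actual update flow and publish provider may differ:

```ts
import { app, dialog } from 'electron';
import { autoUpdater } from 'electron-updater';

app.whenReady().then(() => {
  // Check the configured release feed in the background and download updates silently.
  autoUpdater.checkForUpdatesAndNotify();

  autoUpdater.on('update-downloaded', async (info) => {
    const { response } = await dialog.showMessageBox({
      message: `Version ${info.version} downloaded. Restart to apply?`,
      buttons: ['Restart', 'Later'],
    });
    if (response === 0) autoUpdater.quitAndInstall(); // relaunch into the new version
  });
});
```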
While Reor is designed for local-first operation, it supports optional integration with cloud LLM providers (OpenAI, Anthropic) for users who prefer higher-quality models or need specific capabilities. Users can configure API keys in settings and switch between local and cloud models at runtime. The system maintains a unified chat interface regardless of LLM provider, with fallback logic to use local models if cloud API calls fail.
Unique: Provides optional cloud LLM integration while maintaining local-first as default, with unified chat interface and fallback logic. Users can switch providers at runtime without changing application code.
vs alternatives: More flexible than local-only systems; enables access to higher-quality models while preserving privacy-first design. Simpler than building separate cloud and local implementations.
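A sketch of runtime provider switching with local fallback, using a hypothetical `LlmProvider` interface rather than reor's real abstraction:

```ts
interface LlmProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Try the user-selected provider first (e.g. a cloud model), then fall back to the
// local provider if the call fails (network error, invalid key, rate limit, ...).
async function completeWithFallback(
  prompt: string,
  primary: LlmProvider,
  localFallback: LlmProvider,
): Promise<{ text: string; provider: string }> {
  try {
    return { text: await primary.complete(prompt), provider: primary.name };
  } catch (err) {
    console.warn(`${primary.name} failed (${err}); falling back to ${localFallback.name}`);
    return { text: await localFallback.complete(prompt), provider: localFallback.name };
  }
}
```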
Reor implements a Retrieval-Augmented Generation (RAG) chat system where user questions trigger semantic search across notes to retrieve relevant chunks, which are then passed as context to a local LLM (via Ollama or Transformers.js) for answer generation. The system manages a conversation history, formats retrieved note chunks as context, and streams LLM responses back to the UI. All processing occurs locally; no conversation data or note content is sent to external APIs unless explicitly configured to use cloud models (OpenAI/Anthropic).
Unique: Implements RAG by combining local semantic search (Transformers.js + LanceDB) with local LLM execution (Ollama), creating a fully offline Q&A system with no external API dependencies. Context retrieval is integrated into the chat flow via IPC communication between Electron main process (LLM execution) and renderer (UI), with streaming responses for real-time feedback.
vs alternatives: More private than ChatGPT plugins or cloud-based RAG services; slower response times than API-based alternatives but eliminates data transmission and API costs.
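A condensed sketch of one RAG turn against Ollama's streaming `/api/generate` endpoint, reusing the `hybridSearch` sketch above; the model name and prompt template are placeholders:

```ts
// Retrieve the top note chunks, build a context-grounded prompt, and stream the
// answer from a local Ollama server (default port 11434).
async function* ragChat(question: string, notes: Note[]): AsyncGenerator<string> {
  const hits = await hybridSearch(question, notes, 0.7, 4);
  const context = hits.map((h) => h.note.text).join('\n---\n');

  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3',
      prompt: `Answer using only this context:\n${context}\n\nQuestion: ${question}`,
      stream: true,
    }),
  });

  // Ollama streams newline-delimited JSON objects, each carrying a "response" fragment.
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop()!;
    for (const line of lines) {
      if (line.trim()) yield JSON.parse(line).response as string;
    }
  }
}
```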
Reor provides an Obsidian-like markdown editor built into the Electron renderer process, supporting syntax highlighting, real-time preview, and backlink/wikilink syntax (`[[note-name]]`). The editor integrates with the note filesystem layer to enable creating, editing, and linking notes within the PKM system. Backlinks are rendered as clickable references that navigate to linked notes, and the editor supports standard markdown formatting with code block syntax highlighting.
Unique: Integrates markdown editing directly into Electron app with real-time backlink visualization and wikilink navigation, avoiding the need for external editors. Backlinks are computed from the vector similarity graph, so related notes surface automatically even without explicit `[[links]]`.
vs alternatives: More integrated than using VS Code or external editors; less feature-rich than Obsidian but tightly coupled with local AI capabilities for automatic linking and RAG.
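A small sketch of wikilink extraction; the regex and alias handling are illustrative, not reor's parser:

```ts
// Extract [[wikilink]] targets from markdown, including the [[target|alias]] form,
// so the editor can render them as clickable references to other notes.
function extractWikilinks(markdown: string): string[] {
  const links: string[] = [];
  for (const match of markdown.matchAll(/\[\[([^\]|]+)(?:\|[^\]]+)?\]\]/g)) {
    links.push(match[1].trim());
  }
  return links;
}

// extractWikilinks('See [[Project Plan]] and [[notes/ideas|ideas]].')
//   -> ['Project Plan', 'notes/ideas']
```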
Reor integrates with Ollama, a local LLM runtime, to execute language models entirely on the user's machine. The system allows users to configure which Ollama model to use for chat and text generation, with support for switching models without restarting the app. The main process communicates with Ollama via HTTP API calls, streaming responses back to the renderer for real-time display. Users can also configure cloud-based LLM providers (OpenAI, Anthropic) as fallbacks or alternatives.
Unique: Abstracts LLM execution behind a unified interface that supports both local Ollama models and cloud APIs (OpenAI/Anthropic), allowing users to switch providers without changing application code. Model configuration is persisted in settings and can be changed at runtime without app restart.
vs alternatives: More flexible than hardcoding a single LLM provider; slower than cloud APIs but eliminates API costs and data transmission. Ollama integration is simpler than managing LLM weights directly but requires external process management.
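Model discovery can be sketched against Ollama's `/api/tags` endpoint, which lists locally installed models; persisting the selection is left to the app's settings layer:

```ts
// List models installed in the local Ollama runtime so the user can switch the
// active model at runtime without restarting the app.
async function listOllamaModels(baseUrl = 'http://localhost:11434'): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  const { models } = (await res.json()) as { models: { name: string }[] };
  return models.map((m) => m.name);
}
```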
+5 more capabilities
Vectra stores vector embeddings and metadata in JSON files on disk while maintaining an in-memory index for fast similarity search. It uses a hybrid architecture where the file system serves as the persistent store and RAM holds the active search index, enabling both durability and performance without requiring a separate database server. It supports automatic index persistence and reload cycles.
Unique: Combines file-backed persistence with in-memory indexing, avoiding the complexity of running a separate database service while maintaining reasonable performance for small-to-medium datasets. Uses JSON serialization for human-readable storage and easy debugging.
vs alternatives: Lighter weight than Pinecone or Weaviate for local development, but trades scalability and concurrent access for simplicity and zero infrastructure overhead.
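A generic sketch of this file-backed-plus-in-memory pattern (not vectra's actual API); the class and file layout are hypothetical:

```ts
import { promises as fs } from 'fs';

interface Item { id: string; vector: number[]; metadata: Record<string, unknown> }

// File-backed store with an in-memory working set: the JSON file is the durable
// copy, the array in RAM is what similarity queries actually scan.
class LocalVectorStore {
  items: Item[] = [];
  constructor(private path: string) {}

  // Reload the persisted index into memory (start empty if no file exists yet).
  async load(): Promise<void> {
    try {
      this.items = JSON.parse(await fs.readFile(this.path, 'utf8'));
    } catch {
      this.items = [];
    }
  }

  // Append to the in-memory index and flush the whole index back to disk.
  async insert(item: Item): Promise<void> {
    this.items.push(item);
    await fs.writeFile(this.path, JSON.stringify(this.items, null, 2));
  }
}
```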
Implements vector similarity search using cosine distance on normalized embeddings, with support for alternative distance metrics. Performs brute-force similarity computation across all indexed vectors, returning results ranked by score, and supports a configurable minimum-similarity threshold that filters out weak matches.
Unique: Implements pure cosine similarity without approximation layers, making it deterministic and debuggable but trading performance for correctness. Suitable for datasets where exact results matter more than speed.
vs alternatives: More transparent and easier to debug than approximate methods like HNSW, but significantly slower for large-scale retrieval compared to Pinecone or Milvus.
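A sketch of the brute-force scoring itself, reusing the `Item` type from the previous sketch; the default threshold and result count are illustrative:

```ts
// Exact cosine similarity: deterministic, no approximation, O(n·d) per query.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force scan over every indexed vector, dropping results below minScore.
function search(query: number[], items: Item[], topK = 10, minScore = 0) {
  return items
    .map((item) => ({ item, score: cosine(query, item.vector) }))
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```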
Accepts vectors of configurable dimensionality and automatically normalizes them for cosine similarity computation. Validates that all vectors have consistent dimensions and rejects mismatched vectors. Supports both pre-normalized and unnormalized input, with automatic L2 normalization applied during insertion.
Unique: Automatically normalizes vectors during insertion, eliminating the need for users to handle normalization manually. Validates dimensionality consistency.
vs alternatives: More user-friendly than requiring manual normalization, but adds latency compared to accepting pre-normalized vectors.
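A small sketch of insertion-time validation and L2 normalization under those assumptions:

```ts
// Validate dimensionality against the index's expected size and L2-normalize on
// insert, so cosine similarity later reduces to a plain dot product.
function normalizeForInsert(vector: number[], expectedDim?: number): number[] {
  if (expectedDim !== undefined && vector.length !== expectedDim) {
    throw new Error(`expected ${expectedDim} dimensions, got ${vector.length}`);
  }
  const norm = Math.sqrt(vector.reduce((sum, v) => sum + v * v, 0));
  if (norm === 0) throw new Error('cannot normalize a zero vector');
  return vector.map((v) => v / norm);
}
```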
Exports the entire vector database (embeddings, metadata, index) to standard formats (JSON, CSV) for backup, analysis, or migration. Imports vectors from external sources in multiple formats. Supports format conversion between JSON, CSV, and other serialization formats without losing data.
Unique: Supports multiple export/import formats (JSON, CSV) with automatic format detection, enabling interoperability with other tools and databases. No proprietary format lock-in.
vs alternatives: More portable than database-specific export formats, but less efficient than binary dumps. Suitable for small-to-medium datasets.
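A minimal sketch of JSON and CSV export, reusing the `Item` type above; the CSV column layout is an assumption:

```ts
// Export the whole index as JSON (lossless) or CSV (flat rows: id, vector, metadata
// serialized as JSON strings), with embedded quotes escaped for CSV safety.
function exportIndex(items: Item[], format: 'json' | 'csv'): string {
  if (format === 'json') return JSON.stringify(items, null, 2);
  const rows = items.map((item) =>
    [item.id, JSON.stringify(item.vector), JSON.stringify(item.metadata)]
      .map((cell) => `"${cell.replace(/"/g, '""')}"`)
      .join(','),
  );
  return ['id,vector,metadata', ...rows].join('\n');
}
```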
Implements BM25 (Okapi BM25) lexical search algorithm for keyword-based retrieval, then combines BM25 scores with vector similarity scores using configurable weighting to produce hybrid rankings. Tokenizes text fields during indexing and performs term frequency analysis at query time. Allows tuning the balance between semantic and lexical relevance.
Unique: Combines BM25 and vector similarity in a single ranking framework with configurable weighting, avoiding the need for separate lexical and semantic search pipelines. Implements BM25 from scratch rather than wrapping an external library.
vs alternatives: Simpler than Elasticsearch for hybrid search but lacks advanced features like phrase queries, stemming, and distributed indexing. Better integrated with vector search than bolting BM25 onto a pure vector database.
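A compact sketch of BM25 scoring plus the weighted blend; the parameter defaults (`k1`, `b`, `weight`) are conventional values, not necessarily vectra's:

```ts
// Okapi BM25 score for one document against a tokenized query.
function bm25Score(
  queryTerms: string[],
  docTerms: string[],
  docFreq: Map<string, number>, // term -> number of documents containing it
  totalDocs: number,
  avgDocLen: number,
  k1 = 1.2,
  b = 0.75,
): number {
  const tf = new Map<string, number>();
  for (const t of docTerms) tf.set(t, (tf.get(t) ?? 0) + 1);

  let score = 0;
  for (const term of queryTerms) {
    const f = tf.get(term) ?? 0;
    if (f === 0) continue;
    const df = docFreq.get(term) ?? 0;
    const idf = Math.log((totalDocs - df + 0.5) / (df + 0.5) + 1);
    score += (idf * f * (k1 + 1)) / (f + k1 * (1 - b + (b * docTerms.length) / avgDocLen));
  }
  return score;
}

// Hybrid ranking: blend semantic and lexical scores (both normalized to [0, 1]).
const hybridScore = (semantic: number, lexical: number, weight = 0.5) =>
  weight * semantic + (1 - weight) * lexical;
```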
Supports filtering search results using a Pinecone-compatible query syntax that allows boolean combinations of metadata predicates (equality, comparison, range, set membership). Evaluates filter expressions against metadata objects during search, returning only vectors that satisfy the filter constraints. Supports nested metadata structures and multiple filter operators.
Unique: Implements Pinecone's filter syntax natively without requiring a separate query language parser, enabling drop-in compatibility for applications already using Pinecone. Filters are evaluated in-memory against metadata objects.
vs alternatives: More compatible with Pinecone workflows than generic vector databases, but lacks the performance optimizations of Pinecone's server-side filtering and index-accelerated predicates.
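A sketch of in-memory evaluation of Pinecone-style filter operators; the operator coverage shown here is illustrative:

```ts
type Filter = Record<string, unknown>;

// Evaluate a Pinecone-style filter ({ genre: { $eq: 'drama' } }, $and/$or blocks, ...)
// against one metadata object, entirely in memory.
function matchesFilter(metadata: Record<string, unknown>, filter: Filter): boolean {
  return Object.entries(filter).every(([key, cond]) => {
    if (key === '$and') return (cond as Filter[]).every((f) => matchesFilter(metadata, f));
    if (key === '$or') return (cond as Filter[]).some((f) => matchesFilter(metadata, f));

    const value = metadata[key];
    if (typeof cond !== 'object' || cond === null) return value === cond; // implicit $eq
    return Object.entries(cond as Record<string, unknown>).every(([op, target]) => {
      switch (op) {
        case '$eq': return value === target;
        case '$ne': return value !== target;
        case '$gt': return (value as number) > (target as number);
        case '$gte': return (value as number) >= (target as number);
        case '$lt': return (value as number) < (target as number);
        case '$lte': return (value as number) <= (target as number);
        case '$in': return (target as unknown[]).includes(value);
        case '$nin': return !(target as unknown[]).includes(value);
        default: return false;
      }
    });
  });
}
```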
Integrates with multiple embedding providers (OpenAI, Azure OpenAI, local transformer models via Transformers.js) to generate vector embeddings from text. Abstracts provider differences behind a unified interface, allowing users to swap providers without changing application code. Handles API authentication, rate limiting, and batch processing for efficiency.
Unique: Provides a unified embedding interface supporting both cloud APIs and local transformer models, allowing users to choose between cost/privacy trade-offs without code changes. Uses Transformers.js for browser-compatible local embeddings.
vs alternatives: More flexible than single-provider solutions like LangChain's OpenAI embeddings, but less comprehensive than full embedding orchestration platforms. Local embedding support is unique for a lightweight vector database.
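A hedged sketch of such a provider abstraction with one cloud and one local implementation; the endpoint and model names are examples, not vectra's configuration:

```ts
interface EmbeddingProvider {
  embed(texts: string[]): Promise<number[][]>;
}

// Cloud provider: OpenAI's embeddings endpoint.
class OpenAIEmbeddings implements EmbeddingProvider {
  constructor(private apiKey: string, private model = 'text-embedding-3-small') {}
  async embed(texts: string[]): Promise<number[][]> {
    const res = await fetch('https://api.openai.com/v1/embeddings', {
      method: 'POST',
      headers: { Authorization: `Bearer ${this.apiKey}`, 'Content-Type': 'application/json' },
      body: JSON.stringify({ model: this.model, input: texts }),
    });
    const json = await res.json();
    return json.data.map((d: { embedding: number[] }) => d.embedding);
  }
}

// Local provider: Transformers.js, no network calls or API keys.
class LocalEmbeddings implements EmbeddingProvider {
  async embed(texts: string[]): Promise<number[][]> {
    const { pipeline } = await import('@xenova/transformers');
    const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
    const vectors: number[][] = [];
    for (const text of texts) {
      const tensor = await extractor(text, { pooling: 'mean', normalize: true });
      vectors.push(Array.from(tensor.data as Float32Array));
    }
    return vectors;
  }
}
```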
Runs entirely in the browser using IndexedDB for persistent storage, enabling client-side vector search without a backend server. Synchronizes in-memory index with IndexedDB on updates, allowing offline search and reducing server load. Supports the same API as the Node.js version for code reuse across environments.
Unique: Provides a unified API across Node.js and browser environments using IndexedDB for persistence, enabling code sharing and offline-first architectures. Avoids the complexity of syncing client-side and server-side indices.
vs alternatives: Simpler than building separate client and server vector search implementations, but limited by browser storage quotas and IndexedDB performance compared to server-side databases.
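A sketch of persisting the in-memory index to IndexedDB (reusing the `Item` type above); the database and store names are hypothetical:

```ts
// Open (or create) the browser-side database that mirrors the in-memory index.
function openIndexDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('vector-index', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('items', { keyPath: 'id' });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Write the current in-memory items to IndexedDB so search keeps working offline.
async function persistItems(items: Item[]): Promise<void> {
  const db = await openIndexDb();
  const tx = db.transaction('items', 'readwrite');
  const store = tx.objectStore('items');
  for (const item of items) store.put(item); // upsert by id
  await new Promise<void>((resolve, reject) => {
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```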
+4 more capabilities
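Overall, reor scores higher at 48/100 versus vectra's 41/100 and leads on adoption; the two score evenly on quality and ecosystem.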