pgvector vs wicked-brain
Side-by-side comparison to help you choose.
| Feature | pgvector | wicked-brain |
|---|---|---|
| Type | Framework | Repository |
| UnfragileRank | 46/100 | 32/100 |
| Adoption | 1 | 0 |
| Quality | 0 | 0 |
| Ecosystem | 0 | 1 |
| Match Graph | 0 | 0 |
| Pricing | Free | Free |
| Capabilities | 13 decomposed | 11 decomposed |
| Times Matched | 0 | 0 |
Implements four distinct vector data types (vector/float32, halfvec/float16, sparsevec/sparse, bit/binary) as first-class PostgreSQL types via custom type system integration in src/vector.c, src/halfvec.c, src/sparsevec.c, and src/bitvector.c. Each type includes input/output functions, binary serialization (vector_recv/vector_send), and automatic casting between formats, enabling memory-efficient storage of embeddings directly in table columns alongside relational data without external serialization.
Unique: Implements four vector types (float32, float16, sparse, binary) as native PostgreSQL types with automatic casting and binary serialization, rather than storing vectors as JSON/BYTEA blobs. This enables query planner optimization and direct operator dispatch without deserialization overhead.
vs alternatives: Faster than Pinecone/Weaviate for queries combining vector similarity with relational filters because vectors are stored inline with row data, eliminating network round-trips and join operations.
Provides six distance metrics (L2 Euclidean, inner product, cosine, L1 Manhattan, Hamming, Jaccard) exposed as SQL operators (<->, <#>, <=>, <+>, <~>, <%>) with C implementations in src/vector.c using CPU-specific SIMD dispatch (AVX-512, AVX2, SSE2 fallback). Each operator is registered as a PostgreSQL operator class enabling index-aware query planning and automatic selection of the fastest implementation for the host CPU architecture.
Unique: Implements CPU-aware SIMD dispatch (AVX-512 > AVX2 > SSE2) at runtime, selecting the fastest distance implementation for the host CPU without recompilation. Operators are registered as PostgreSQL operator classes, enabling the query planner to push distance calculations into index scans.
vs alternatives: Faster than Redis/Elasticsearch for distance calculations because SIMD operations execute in-process without serialization, and query planner can optimize distance computation order based on selectivity.
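The six metrics can be sketched in plain Python to show what each operator computes (a minimal sketch of the math only; pgvector's actual implementations are SIMD-optimized C):

```python
import math

def l2(a, b):              # <->  Euclidean (L2) distance
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def neg_inner_product(a, b):   # <#>  pgvector returns the *negative* inner product
    return -sum(x * y for x, y in zip(a, b))

def cosine(a, b):          # <=>  cosine distance = 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (na * nb)

def l1(a, b):              # <+>  Manhattan (L1) distance
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(a, b):         # <~>  on bit vectors: count of differing bits
    return sum(x != y for x, y in zip(a, b))

def jaccard(a, b):         # <%>  on bit vectors: 1 - |intersection| / |union|
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return 1 - inter / union
```

Note that `<#>` is negated so that smaller values mean "closer", letting all six operators share the same ascending `ORDER BY` convention.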
Integrates with PostgreSQL's VACUUM process to maintain index consistency as vectors are inserted, updated, or deleted. VACUUM removes deleted vectors from indexes and reclaims space, while INSERT/UPDATE operations incrementally update HNSW graph structure or IVFFlat cluster assignments. Index maintenance is automatic and transparent — no manual index rebuild required for normal operations. VACUUM can be run manually or automatically via autovacuum daemon, with configurable aggressiveness via vacuum_cost_delay and related parameters.
Unique: Integrates index maintenance into PostgreSQL's VACUUM process, enabling automatic cleanup of deleted vectors and incremental index updates without manual intervention. Maintenance is transparent and requires no application code changes.
vs alternatives: More reliable than manual index maintenance because VACUUM is integrated into PostgreSQL's transaction system, ensuring consistency between table and index state even during concurrent operations.
pgvector works with any PostgreSQL client library (psycopg2 for Python, pg for Node.js, pq for Go, etc.) via the standard PostgreSQL wire protocol. Vector types are transmitted as binary data using PostgreSQL's vector_send/vector_recv functions, requiring no special client-side code beyond standard parameterized queries. Clients can pass vectors as text literals (e.g., '[0.1, 0.2, 0.3]') or binary data, with automatic conversion handled by pgvector's type system.
Unique: Works with any PostgreSQL client library without requiring language-specific adapters, leveraging the standard PostgreSQL wire protocol for vector transmission. This enables seamless integration into polyglot applications.
vs alternatives: More flexible than specialized vector DB clients because pgvector uses standard PostgreSQL protocols, enabling use from any language with PostgreSQL support without vendor-specific SDKs.
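The text-literal format mentioned above is simple enough to build by hand; here is a small helper pair illustrating it (the table and column names in the comment are hypothetical):

```python
def to_pgvector_literal(values):
    """Format a Python sequence as a pgvector text literal, e.g. '[0.1,0.2,0.3]'."""
    return "[" + ",".join(repr(float(v)) for v in values) + "]"

def from_pgvector_literal(text):
    """Parse a pgvector text literal back into a list of floats."""
    return [float(v) for v in text.strip("[]").split(",")]

# With any PostgreSQL driver this literal passes through a standard
# parameterized query, e.g. (hypothetical schema):
#   cur.execute("INSERT INTO items (embedding) VALUES (%s::vector)",
#               (to_pgvector_literal([0.1, 0.2, 0.3]),))
```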
Supports automatic and explicit casting between vector types (vector ↔ halfvec ↔ sparsevec ↔ bit) via PostgreSQL's CAST system. Casting from float32 to float16 rounds to the nearest representable half-precision value (roughly 3 significant decimal digits), casting to sparse requires external sparsification, and casting to binary uses threshold-based quantization. Casts are implemented in src/vector.c and registered via CREATE CAST statements, enabling implicit conversion in some contexts and explicit conversion via the CAST() operator.
Unique: Implements type casting between four vector formats (float32, float16, sparse, binary) via PostgreSQL's CAST system, enabling format conversion without re-computing embeddings. Casting is lossy in some directions (float32 → float16, float32 → bit) but enables memory optimization.
vs alternatives: More flexible than specialized vector DBs because PostgreSQL's CAST system enables arbitrary format conversions, allowing experimentation with different representations without data movement.
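The two lossy conversions can be illustrated in Python (a sketch of the underlying ideas, not pgvector's C code; `struct`'s `"e"` format applies the same IEEE 754 half-precision rounding):

```python
import struct

def round_to_float16(x):
    """Round a float to the nearest half-precision value,
    as a vector -> halfvec cast does."""
    return struct.unpack("e", struct.pack("e", x))[0]

def quantize_to_bits(vec, threshold=0.0):
    """Threshold-based quantization: the idea behind casting
    a float vector down to a binary bit vector."""
    return [1 if v > threshold else 0 for v in vec]
```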
Implements Hierarchical Navigable Small World (HNSW) index as a PostgreSQL access method (hnswhandler in src/hnsw.c) supporting approximate nearest neighbor search with configurable M (max connections per node) and ef_construction (search width during build) parameters. The index is built incrementally during INSERT operations and supports parallel construction via PostgreSQL's parallel index build framework, storing the hierarchical graph structure, with layer information and neighbor lists, in PostgreSQL's page-based storage.
Unique: Implements HNSW as a native PostgreSQL access method with full integration into the query planner and WAL replication system. Supports parallel index construction via PostgreSQL's parallel workers, and stores the hierarchical graph structure directly in PostgreSQL's storage layer rather than as external files.
vs alternatives: More reliable than Pinecone for mission-critical systems because HNSW indexes participate in PostgreSQL transactions, point-in-time recovery, and replication — no separate index durability concerns.
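The navigation idea behind HNSW can be sketched as a greedy walk on one graph layer (a deliberately simplified sketch: real HNSW descends through multiple layers and keeps a candidate beam of size ef, not a single current node):

```python
def greedy_search(graph, dist, query, entry):
    """Greedy descent on a single graph layer: hop to whichever neighbor
    is closer to the query, stopping when no neighbor improves."""
    current = entry
    improved = True
    while improved:
        improved = False
        for neighbor in graph[current]:
            if dist(query, neighbor) < dist(query, current):
                current, improved = neighbor, True
    return current

# Toy graph: a chain of 2D points keyed by coordinates.
graph = {
    (0, 0): [(1, 0)],
    (1, 0): [(0, 0), (2, 0)],
    (2, 0): [(1, 0), (3, 0)],
    (3, 0): [(2, 0)],
}
sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
nearest = greedy_search(graph, sq_dist, (3, 0), (0, 0))  # walks 0,0 -> 3,0
```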
Implements Inverted File Flat (IVFFlat) index as a PostgreSQL access method (ivfflathandler in src/ivfflat.c) using k-means clustering to partition vectors into lists, storing cluster centroids and flat lists of vectors per cluster. Query execution performs exact distance calculation only within the top-k nearest clusters (determined by the ivfflat.probes parameter), reducing the search space from the full dataset to typically 1-5% of vectors. The index is built via k-means clustering during CREATE INDEX and supports list-level parallelization during queries.
Unique: Uses k-means clustering to partition vectors into inverted lists, then performs exact distance calculation only within top-k nearest clusters. This approach trades recall for memory efficiency and index build speed, making it suitable for billion-scale deployments where HNSW memory overhead is prohibitive.
vs alternatives: More memory-efficient than HNSW for 10M+ vectors (1-2x vs 8-12x overhead), and faster to build (O(n) vs O(n log n)), making it better for cost-sensitive cloud deployments where storage is the primary constraint.
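The IVF partition-and-probe pattern can be sketched like so (centroids are assumed precomputed here; a real build learns them with k-means during CREATE INDEX):

```python
def build_ivf(vectors, centroids, dist):
    """Assign each vector to the inverted list of its nearest centroid."""
    lists = {i: [] for i in range(len(centroids))}
    for v in vectors:
        i = min(range(len(centroids)), key=lambda i: dist(v, centroids[i]))
        lists[i].append(v)
    return lists

def ivf_search(query, centroids, lists, dist, probes=1):
    """Scan only the `probes` nearest lists, then rank candidates exactly."""
    order = sorted(range(len(centroids)), key=lambda i: dist(query, centroids[i]))
    candidates = [v for i in order[:probes] for v in lists[i]]
    return min(candidates, key=lambda v: dist(query, v))

sq_dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
centroids = [(0, 0), (10, 10)]
lists = build_ivf([(1, 1), (0, 2), (9, 9), (11, 10)], centroids, sq_dist)
best = ivf_search((10, 9), centroids, lists, sq_dist, probes=1)
```

With probes=1 only one of the two lists is scanned, which is exactly the recall/speed trade the capability above describes.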
Enables combining vector similarity queries with standard SQL WHERE clauses via PostgreSQL's query planner, which can push distance calculations into index scans and apply relational filters before or after index lookups. The planner estimates selectivity of both vector and relational predicates, choosing between index-first (if vector predicate is selective) or filter-first (if relational predicate is selective) execution strategies. Supports re-ranking patterns where approximate index results are re-scored with exact distance calculations.
Unique: Leverages PostgreSQL's query planner to optimize execution order of vector and relational predicates based on estimated selectivity. Supports re-ranking patterns where approximate index results are re-scored with exact distance calculations, enabling multi-stage ranking pipelines.
vs alternatives: More flexible than specialized vector DBs (Pinecone, Weaviate) because PostgreSQL's query planner can optimize arbitrary combinations of vector and relational predicates, rather than being limited to pre-defined filter types.
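The planner's two execution strategies can be mimicked in miniature (an illustrative sketch, not PostgreSQL's planner; row shape and field names are invented):

```python
def hybrid_search(rows, query, dist, predicate, k, filter_first=True):
    """Filter-first: apply the relational predicate, then rank by distance.
    Rank-first: order everything by distance, then filter.
    PostgreSQL picks between analogous plans based on estimated selectivity."""
    if filter_first:
        candidates = [r for r in rows if predicate(r)]
        return sorted(candidates, key=lambda r: dist(query, r["vec"]))[:k]
    ranked = sorted(rows, key=lambda r: dist(query, r["vec"]))
    return [r for r in ranked if predicate(r)][:k]

rows = [
    {"vec": (0, 0), "cat": "a"},
    {"vec": (1, 1), "cat": "b"},
    {"vec": (2, 2), "cat": "a"},
]
sq_dist = lambda q, v: sum((x - y) ** 2 for x, y in zip(q, v))
only_a = lambda r: r["cat"] == "a"
```

Both strategies return the same rows; they differ only in how much work each phase does, which is why selectivity estimates matter.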
+5 more capabilities
Indexes markdown files containing code skills and knowledge into a local SQLite database with FTS5 (Full-Text Search 5) enabled, enabling semantic keyword matching without vector embeddings or external infrastructure. The system parses markdown structure (headings, code blocks, metadata) and builds inverted indices for fast retrieval of skill documentation by natural language queries. No external vector DB or embedding service required — all indexing and search happens locally.
Unique: Uses SQLite FTS5 for keyword-based retrieval instead of vector embeddings, eliminating dependency on external embedding services (OpenAI, Cohere) and vector databases while maintaining sub-millisecond local search performance.
vs alternatives: Simpler and faster to set up than Pinecone/Weaviate RAG stacks for developers who prioritize zero infrastructure over semantic similarity.
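The FTS5 indexing approach works with nothing but Python's standard library (assuming the bundled SQLite was compiled with FTS5, as CPython's normally is; table and column names here are illustrative):

```python
import sqlite3

# In-memory FTS5 index over skill documents.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE skills USING fts5(title, body)")
conn.executemany(
    "INSERT INTO skills VALUES (?, ?)",
    [
        ("async python", "How to use asyncio tasks and event loops."),
        ("git rebase", "Rewriting history with interactive rebase."),
    ],
)
# MATCH runs a full-text query; ORDER BY rank sorts by BM25 relevance.
rows = conn.execute(
    "SELECT title FROM skills WHERE skills MATCH ? ORDER BY rank", ("asyncio",)
).fetchall()
```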
Retrieves indexed skills from the local SQLite database and injects them into the context window of AI coding CLIs (Claude Code, Cursor, Gemini CLI, GitHub Copilot) as formatted markdown or structured prompts. The system acts as a middleware layer that intercepts queries, searches the skill index, and prepends relevant documentation to the AI's input context before sending to the LLM. Supports multiple CLI integrations through adapter patterns.
Unique: Implements RAG-like behavior without vector embeddings by using FTS5 keyword matching and injecting matched skills directly into CLI context windows, designed specifically for AI coding assistants rather than generic LLM applications.
vs alternatives: Lighter weight than full RAG pipelines (no embedding model, no vector DB) while still enabling skill-aware code generation in popular AI CLIs.
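The injection step described above might look roughly like this (a hypothetical sketch of the middleware; the delimiter and budget are invented, not wicked-brain's actual format):

```python
def inject_context(user_query, matched_skills, max_chars=4000):
    """Prepend matched skill docs to the prompt sent to the AI CLI,
    stopping once a character budget for the context window is spent."""
    context, used = [], 0
    for skill in matched_skills:
        block = f"<skill>\n{skill}\n</skill>"
        if used + len(block) > max_chars:
            break
        context.append(block)
        used += len(block)
    return "\n".join(context + ["", user_query])
```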
Provides a command-line interface for managing the skill library (add, remove, search, list, export) without requiring programmatic API calls. Commands include `wicked-brain add <file>`, `wicked-brain search <query>`, `wicked-brain list`, `wicked-brain export`, enabling developers to manage skills from the terminal. Supports piping and scripting for automation.
pgvector scores higher at 46/100 vs wicked-brain at 32/100. pgvector leads on adoption and quality, while wicked-brain is stronger on ecosystem.
© 2026 Unfragile. Stronger through disorder.
Unique: Provides a full-featured CLI for skill management (add, search, list, export) enabling terminal-based workflows and shell script integration without requiring a GUI or API client.
vs alternatives: More scriptable and automation-friendly than GUI-based knowledge management tools.
Provides a structured system for organizing, storing, and versioning coding skills as markdown files with optional metadata (tags, difficulty, language, category). Skills are stored in a flat or hierarchical directory structure and can be edited directly in any text editor. The system tracks which skills are indexed and provides utilities to add, update, and remove skills from the index without requiring a database UI or special tooling.
Unique: Treats skills as first-class markdown files with Git versioning rather than database records, enabling developers to manage their knowledge base using standard text editors and version control workflows.
vs alternatives: More portable and version-control-friendly than proprietary knowledge base tools (Notion, Obsidian plugins) while remaining compatible with standard developer workflows.
Executes all knowledge indexing and retrieval operations locally on the developer's machine using SQLite FTS5, eliminating the need for external services, API keys, or cloud infrastructure. The entire skill database is stored as a single SQLite file that can be backed up, versioned, or shared via Git. No network calls, no rate limits, no vendor lock-in — all operations complete in milliseconds on local hardware.
Unique: Deliberately avoids external dependencies (vector DBs, embedding APIs, cloud services) by using only SQLite FTS5, making it a RAG-adjacent system that requires zero infrastructure setup or API credentials.
vs alternatives: Eliminates the operational complexity and cost of vector database services (Pinecone, Weaviate) while maintaining offline-first privacy guarantees that cloud-based RAG systems cannot provide.
Provides an extensible adapter pattern for integrating the skill library with multiple AI coding CLIs through standardized interfaces. Each CLI adapter handles the specific protocol, context format, and API of its target tool (Claude Code's prompt format, Cursor's context injection, Gemini CLI's request structure). New adapters can be added by implementing a simple interface without modifying core indexing logic.
Unique: Uses the adapter pattern to abstract CLI-specific integration details, allowing a single skill library to work across Claude Code, Cursor, Gemini CLI, and custom tools without duplicating indexing or retrieval logic.
vs alternatives: More flexible than CLI-specific plugins because adapters are decoupled from core indexing, enabling skill library reuse across tools without reimplementing search.
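An adapter layer of this shape could be sketched as follows (class and method names are hypothetical illustrations of the pattern, not wicked-brain's actual interfaces):

```python
from typing import Protocol

class CLIAdapter(Protocol):
    """Interface each CLI integration implements."""
    def format_context(self, skills: list[str]) -> str: ...

class ClaudeCodeAdapter:
    def format_context(self, skills: list[str]) -> str:
        return "\n\n".join(f"# Skill\n{s}" for s in skills)

class CursorAdapter:
    def format_context(self, skills: list[str]) -> str:
        return "\n".join(f"<context>{s}</context>" for s in skills)

def build_prompt(adapter: CLIAdapter, skills: list[str], query: str) -> str:
    # Core retrieval stays identical; only per-CLI formatting varies.
    return adapter.format_context(skills) + "\n\n" + query
```

Adding support for a new tool means writing one small class, leaving indexing and search untouched.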
Converts natural language queries into FTS5 search expressions by tokenizing, normalizing, and optionally expanding queries with synonyms or related terms. The system handles common query patterns (e.g., 'how do I X' → search for skill tags matching X) and applies FTS5 operators (AND, OR, phrase matching) to improve precision. No machine learning or semantic models — purely lexical matching with heuristic query expansion.
Unique: Implements heuristic-based query expansion for FTS5 to handle natural language variations without semantic embeddings, using rule-based synonym mapping and query pattern recognition.
vs alternatives: Simpler and faster than semantic search (no embedding inference latency) while still handling common query variations through configurable synonym expansion.
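A minimal version of this lexical pipeline (the stop-word list and synonym table below are illustrative, not wicked-brain's shipped rules):

```python
import re

SYNONYMS = {"remove": ["delete", "drop"], "async": ["asyncio", "await"]}

def to_fts5_query(natural_query):
    """Tokenize, drop filler words, and OR-expand synonyms
    into an FTS5 boolean expression."""
    stop = {"how", "do", "i", "a", "the", "to"}
    tokens = [t for t in re.findall(r"[a-z0-9]+", natural_query.lower())
              if t not in stop]
    parts = []
    for t in tokens:
        alts = [t] + SYNONYMS.get(t, [])
        parts.append("(" + " OR ".join(alts) + ")" if len(alts) > 1 else t)
    return " AND ".join(parts)
```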
Parses markdown skill files to extract structured metadata (title, description, tags, language, difficulty, category) from frontmatter (YAML/TOML) or markdown conventions (heading levels, code fence language tags). Metadata is indexed alongside skill content, enabling filtered searches (e.g., 'find all Python skills tagged with async'). Supports custom metadata fields through configuration.
Unique: Extracts metadata from markdown structure (YAML frontmatter, code fence language tags, heading levels) rather than requiring a separate metadata file, keeping skills self-contained and editable in any text editor.
vs alternatives: More portable than database-based metadata (Notion, Obsidian) because metadata lives in the markdown file itself and is version-controllable.
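Frontmatter extraction of the kind described can be sketched without any YAML dependency (a minimal sketch handling flat `key: value` pairs only; a real parser would use a YAML/TOML library):

```python
def parse_frontmatter(markdown_text):
    """Split a markdown file into (metadata dict, body) using '---' fences."""
    lines = markdown_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, markdown_text          # no frontmatter block
    meta = {}
    for i, line in enumerate(lines[1:], start=1):
        if line.strip() == "---":         # closing fence: rest is the body
            return meta, "\n".join(lines[i + 1:])
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return {}, markdown_text              # unterminated fence: treat as body
```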
+3 more capabilities