Struct vs Relativity
Side-by-side comparison to help you choose.
| Feature | Struct | Relativity |
|---|---|---|
| Type | Product | Product |
| UnfragileRank | 27/100 | 32/100 |
| Adoption | 0 | 0 |
| Quality | 0 | 1 |
| Ecosystem | 0 | 0 |
| Match Graph | 0 | 0 |
| Pricing | Free | Paid |
| Capabilities | 9 decomposed | 13 decomposed |
| Times Matched | 0 | 0 |
Converts unstructured text documents into dense vector embeddings and indexes them in a vector database, enabling semantic similarity search that retrieves results based on meaning rather than keyword matching. Uses embedding models (likely OpenAI or similar) to transform documents and queries into comparable vector space, then performs approximate nearest-neighbor search to return contextually relevant results ranked by cosine similarity or similar distance metrics.
Unique: Combines vector search with SEO-optimized knowledge page generation in a single product, eliminating the typical workflow of managing a separate vector database (Pinecone, Weaviate) and a content platform (Notion, Confluence) — the integration point is built-in rather than requiring custom orchestration
vs alternatives: Faster time-to-value than building custom semantic search on Pinecone or Elasticsearch because indexing and search are pre-configured; more semantic-aware than traditional keyword search in Confluence or Notion but less customizable than pure vector databases
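The retrieval mechanism described above can be sketched in a few lines. This is a hedged illustration, not Struct's implementation: the vectors are hand-made stand-ins for model embeddings, and a real system would use an approximate nearest-neighbor index (HNSW, IVF) rather than this brute-force cosine scan.

```python
# Toy semantic search: rank documents by cosine similarity to a query vector.
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by both vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=2):
    # Score every indexed document and return the top_k by similarity.
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Hypothetical 3-dimensional "embeddings" for three help articles.
index = {
    "reset-password": [0.9, 0.1, 0.0],
    "billing-faq":    [0.1, 0.8, 0.2],
    "api-auth":       [0.7, 0.2, 0.3],
}
results = search([1.0, 0.0, 0.1], index)
```

Note that the query never has to share a keyword with a document: only the vectors need to be close, which is what "meaning rather than keyword matching" amounts to in practice.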
Automatically generates or transforms indexed knowledge base content into SEO-optimized HTML pages with structured metadata (meta tags, Open Graph, schema markup), heading hierarchy, and internal linking suggestions. Likely uses templates and heuristics to inject keywords, optimize title/description length, and structure content for search engine crawlability while maintaining readability. Pages are generated from indexed vector content, creating a feedback loop where search-relevant documents become discoverable pages.
Unique: Tightly couples semantic search indexing with SEO page generation, treating search-relevance and search-engine-discoverability as a unified problem rather than separate workflows — pages are generated from vector-indexed content, ensuring consistency between what users find via semantic search and what Google finds via crawling
vs alternatives: Eliminates manual SEO optimization work that Notion, Confluence, or static site generators require; more automated than Docusaurus or MkDocs but less customizable than hand-tuned SEO in custom-built documentation sites
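The metadata-injection step can be illustrated with a small sketch. The field names (`title`, `summary`, `url`) and the character limits are assumptions reflecting common SEO guidance, not Struct's actual schema or templates.

```python
# Hypothetical sketch: build an SEO <head> fragment from an indexed document.
def seo_head(doc):
    title = doc["title"][:60]        # ~60 chars is a common SERP title limit
    desc = doc["summary"][:155]      # ~155 chars is a common description limit
    return "\n".join([
        f"<title>{title}</title>",
        f'<meta name="description" content="{desc}">',
        f'<meta property="og:title" content="{title}">',
        f'<link rel="canonical" href="{doc["url"]}">',
    ])

head = seo_head({
    "title": "How to rotate API keys",
    "summary": "Step-by-step guide to rotating API keys without downtime.",
    "url": "https://docs.example.com/rotate-api-keys",
})
```

A real generator would also emit schema.org structured data and escape HTML in field values; the point here is only that page metadata is derived mechanically from the same records the vector index holds.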
Accepts unstructured knowledge base content (documentation, FAQs, help articles) in multiple formats and automatically parses, chunks, and indexes it into the vector search system. Likely uses document parsing libraries to extract text from markdown/HTML, applies chunking strategies (sliding windows, semantic boundaries) to create indexable units, and batches embedding generation. Metadata extraction (title, URL, category) is preserved for ranking and filtering.
Unique: Ingestion is tightly integrated with vector indexing — no separate ETL step or external pipeline required; documents are parsed, chunked, embedded, and indexed in a single workflow managed by the platform
vs alternatives: Simpler than building custom ingestion pipelines with LangChain or LlamaIndex because chunking and embedding are pre-configured; more opinionated than pure vector databases like Pinecone, which require you to manage ingestion separately
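The sliding-window chunking strategy mentioned above can be sketched as follows. The window and overlap sizes are arbitrary illustrations; production chunkers often also respect semantic boundaries such as headings or sentence breaks.

```python
# Toy sliding-window chunker: split text into overlapping word windows,
# each of which would become one embeddable, indexable unit.
def chunk(text, window=50, overlap=10):
    words = text.split()
    step = window - overlap          # advance by window minus overlap
    chunks = []
    for start in range(0, len(words), step):
        piece = words[start:start + window]
        if piece:
            chunks.append(" ".join(piece))
        if start + window >= len(words):
            break                    # last window already covers the tail
    return chunks

# 120 synthetic words -> 3 overlapping chunks of up to 50 words each.
doc = " ".join(f"word{i}" for i in range(120))
pieces = chunk(doc, window=50, overlap=10)
```

The overlap exists so that a sentence falling on a chunk boundary still appears intact in at least one chunk, which keeps boundary content retrievable.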
Enables filtering search results by document metadata (category, tags, author, date, URL path) and supports faceted navigation to narrow results without re-querying. Likely stores metadata alongside embeddings and applies post-retrieval filtering or pre-filters the vector search space. Facets are dynamically generated from indexed content, allowing users to explore knowledge base structure without keyword queries.
Unique: Metadata filtering is built into the search interface rather than a separate query parameter — facets are dynamically generated from indexed content and presented as part of the search UI, creating an exploratory search experience
vs alternatives: More user-friendly than Elasticsearch faceted search because filtering is pre-configured; less flexible than Algolia's faceting because metadata schema is fixed
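The filter-then-facet pattern described above looks roughly like this. The metadata fields are hypothetical; the pattern — filter the hit set, then count the remaining values per field — is what a faceted navigation UI renders.

```python
# Sketch of post-retrieval metadata filtering and dynamic facet counting.
from collections import Counter

# Hypothetical search hits with metadata stored alongside each document.
hits = [
    {"id": 1, "category": "billing", "author": "ana"},
    {"id": 2, "category": "auth",    "author": "ana"},
    {"id": 3, "category": "auth",    "author": "ben"},
]

def filter_hits(hits, **conditions):
    # Keep hits whose metadata matches every requested key/value pair.
    return [h for h in hits if all(h.get(k) == v for k, v in conditions.items())]

def facets(hits, field):
    # Count distinct values of a field across the current hit set.
    return Counter(h[field] for h in hits)

auth_hits = filter_hits(hits, category="auth")
author_facet = facets(auth_hits, "author")
```

Because facet counts are recomputed from the filtered hit set, the UI can show users how many results each further refinement would leave, without a new vector query.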
Ranks search results by relevance using vector similarity scores and optional secondary signals (metadata recency, document popularity, click-through data). Likely uses cosine similarity or dot-product scoring on embeddings, with optional boosting for high-quality or frequently-accessed documents. Relevance tuning may expose simple controls (boost by category, date decay) without requiring model retraining.
Unique: Ranking is implicit in the vector search layer — results are ordered by embedding similarity without explicit ranking configuration, though secondary signals may be available as simple tuning knobs rather than a full ranking framework
vs alternatives: Simpler than Elasticsearch BM25 tuning or Algolia's ranking rules because vector similarity is the primary signal; less powerful than learning-to-rank systems like LambdaMART because it doesn't adapt to user behavior
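A date-decay boost of the kind described above can be sketched in a few lines. The half-life and the 70/30 weighting are invented for illustration; they stand in for the "simple tuning knobs" the text mentions, not any documented Struct parameter.

```python
# Toy ranking knob: blend a base similarity score with exponential date decay.
import math

def decayed_score(similarity, age_days, half_life_days=90):
    # Recency factor halves every half_life_days.
    decay = 0.5 ** (age_days / half_life_days)
    # Similarity carries 70% of the score; recency contributes up to 30%.
    return similarity * (0.7 + 0.3 * decay)

fresh = decayed_score(0.80, age_days=0)     # brand-new doc keeps full boost
stale = decayed_score(0.80, age_days=365)   # year-old doc's boost has decayed
```

Two documents with identical embedding similarity thus rank differently by age, which is all a "date decay" control has to achieve.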
Ingests and indexes knowledge content from multiple sources (uploaded files, API endpoints, web URLs, connected platforms) into a unified searchable index. Likely maintains source attribution and deduplication logic to prevent indexing the same content twice. Supports incremental updates from sources without full re-indexing, enabling continuous synchronization with external knowledge bases.
Unique: Consolidation happens at the indexing layer — multiple sources are parsed, deduplicated, and indexed into a single vector space, creating a unified search experience without requiring users to query multiple systems separately
vs alternatives: More convenient than manually managing multiple vector databases or search indices; less flexible than custom ETL pipelines because source integrations are pre-built and limited
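The deduplicate-while-preserving-attribution logic can be sketched with content hashing. The record fields (`text`, `source`) are illustrative assumptions; a real pipeline would normalize more aggressively and handle near-duplicates, not just exact matches.

```python
# Sketch of multi-source consolidation: hash normalized text, index each
# distinct document once, and record every source that contributed it.
import hashlib

def consolidate(docs):
    seen = {}  # content hash -> indexed record
    for doc in docs:
        key = hashlib.sha256(doc["text"].strip().lower().encode()).hexdigest()
        if key in seen:
            # Duplicate content: skip re-indexing, keep source attribution.
            seen[key]["sources"].append(doc["source"])
        else:
            seen[key] = {"text": doc["text"], "sources": [doc["source"]]}
    return list(seen.values())

merged = consolidate([
    {"text": "Reset your password", "source": "confluence"},
    {"text": "reset your password", "source": "helpdesk"},
    {"text": "Rotate API keys",     "source": "confluence"},
])
```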
Hosts generated knowledge pages on a public-facing domain with automatic URL routing, custom branding, and optional white-label options. Pages are served with SEO metadata, structured data, and analytics tracking. Likely uses a CDN for fast global delivery and supports custom domain configuration. Pages are dynamically generated from indexed content or pre-rendered for performance.
Unique: Hosting is integrated with knowledge page generation — pages are automatically published to a managed platform rather than requiring separate deployment to a web server or static site host, reducing operational overhead
vs alternatives: Simpler than self-hosting documentation on Vercel or GitHub Pages because deployment is automatic; less customizable than custom-built sites but faster to launch
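The automatic URL-routing step can be reduced to a slug function. This is a minimal sketch of the happy path under an assumed custom domain; a real platform would also handle slug collisions, redirects on renames, and reserved paths.

```python
# Minimal sketch: derive a stable public URL for a published knowledge page.
import re

def page_url(title, domain="docs.example.com"):
    # Lowercase, replace runs of non-alphanumerics with hyphens, trim edges.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"https://{domain}/{slug}"

url = page_url("How to Rotate API Keys?")
```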
Tracks search queries, click-through rates, and user engagement with search results to identify gaps in knowledge base coverage and popular search intents. Likely logs queries, result selections, and page dwell time, then surfaces aggregated insights (top queries, zero-result queries, trending topics). May use these signals to recommend new content or identify documentation gaps.
Unique: Analytics are built into the search platform rather than requiring external tools like Google Analytics or Mixpanel — search behavior is captured natively and surfaced as actionable insights for documentation improvement
vs alternatives: More focused on search behavior than Google Analytics because it tracks query-level data; less comprehensive than dedicated analytics platforms but integrated into the search workflow
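The aggregation step — top queries and zero-result queries — is straightforward to sketch. The log schema (`query`, `result_count`) is an assumption for illustration, not Struct's actual event format.

```python
# Sketch of search-analytics aggregation over a query log: surface the most
# frequent queries and the zero-result queries that signal content gaps.
from collections import Counter

log = [
    {"query": "reset password", "result_count": 4},
    {"query": "reset password", "result_count": 4},
    {"query": "sso setup",      "result_count": 0},
    {"query": "delete account", "result_count": 2},
]

# Most frequent queries, with counts.
top_queries = Counter(e["query"] for e in log).most_common(2)

# Queries that returned nothing: candidate topics for new documentation.
zero_result = sorted({e["query"] for e in log if e["result_count"] == 0})
```

A zero-result query is the clearest gap signal available: users asked, and the knowledge base had nothing to show them.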
Automatically categorizes and codes documents based on learned patterns from human-reviewed samples, using machine learning to predict relevance, privilege, and responsiveness. Reduces manual review burden by identifying documents that match specified criteria without human intervention.
Ingests and processes massive volumes of documents in native formats while preserving metadata integrity and creating searchable indices. Handles format conversion, deduplication, and metadata extraction without data loss.
Provides tools for organizing and retrieving documents during depositions and trial, including document linking, timeline creation, and quick-search capabilities. Enables attorneys to rapidly locate supporting documents during proceedings.
Manages documents subject to regulatory requirements and compliance obligations, including retention policies, audit trails, and regulatory reporting. Tracks document lifecycle and ensures compliance with legal holds and preservation requirements.
Manages multi-reviewer document review workflows with task assignment, progress tracking, and quality control mechanisms. Supports parallel review by multiple team members with conflict resolution and consistency checking.
Enables rapid searching across massive document collections using full-text indexing, Boolean operators, and field-specific queries. Supports complex search syntax for precise document retrieval and filtering.
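The mechanism behind Boolean operators on large collections is an inverted index: a map from each term to the set of documents containing it, over which AND/OR/NOT become set operations. A minimal sketch, with a toy corpus (real e-discovery indices add stemming, field queries, and proximity operators):

```python
# Boolean retrieval over a full-text inverted index.
from collections import defaultdict

corpus = {
    1: "email regarding merger timeline",
    2: "merger agreement draft",
    3: "timeline for product launch",
}

# Build the inverted index: term -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in corpus.items():
    for term in text.lower().split():
        index[term].add(doc_id)

# Boolean operators reduce to set algebra on posting lists.
def AND(a, b): return index[a] & index[b]
def OR(a, b):  return index[a] | index[b]
def NOT(a):    return set(corpus) - index[a]

hits = AND("merger", "timeline")   # documents containing both terms
```

Because each posting list is precomputed at index time, a Boolean query touches only the sets for its terms, never the full text, which is what makes this fast at scale.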
Relativity scores higher at 32/100 vs Struct at 27/100. However, Struct offers a free tier which may be better for getting started.
Identifies and flags privileged communications (attorney-client, work product) and confidential information through pattern recognition and metadata analysis. Maintains comprehensive audit trails of all access to sensitive materials.
Implements role-based access controls with fine-grained permissions at document, workspace, and field levels. Allows administrators to restrict access based on user roles, case assignments, and security clearances.